  • Review Article
  • Open access
  • Published: 12 February 2024

Education reform and change driven by digital technology: a bibliometric study from a global perspective

  • Chengliang Wang 1 ,
  • Xiaojiao Chen 1 ,
  • Teng Yu   ORCID: orcid.org/0000-0001-5198-7261 2 , 3 ,
  • Yidan Liu 1 , 4 &
  • Yuhui Jing 1  

Humanities and Social Sciences Communications, volume 11, Article number: 256 (2024)

13k Accesses · 11 Citations · 1 Altmetric

  • Development studies
  • Science, technology and society

Amidst the global digital transformation of educational institutions, digital technology has emerged as a significant area of interest among scholars. Such technologies have played an instrumental role in enhancing learner performance and improving the effectiveness of teaching and learning. These digital technologies also ensured the sustainability and stability of education during the COVID-19 pandemic. Despite this, systematic reviews of the current state of digital technology application in education remain scarce. To address this gap, this study used the Web of Science Core Collection as its data source (specifically the high-quality SSCI and SCIE indices) and implemented a topic search with set keywords, yielding 1849 initial publications. Following the PRISMA guidelines, we refined the selection to 588 high-quality articles. Using software tools such as CiteSpace, VOSviewer, and Charticulator, we reviewed these 588 publications to identify core authors (such as Selwyn, Henderson, and Edwards), highly productive countries/regions (England, Australia, the USA), key institutions (Monash University, Australian Catholic University), and crucial journals in the field (Education and Information Technologies, Computers & Education, British Journal of Educational Technology). Evolutionary analysis reveals four developmental periods in the research field of digital technology education application: the germination period, the initial development period, the key exploration period, and the acceleration period of change. The study highlights the dual influence of technological factors and historical context on the research topic. Technology is a key factor enabling education to transform and upgrade, and the context of the times is an important driving force promoting the adoption of new technologies in the education system and the transformation and upgrading of education.
Additionally, the study identifies three frontier hotspots in the field: physical education, digital transformation, and professional development under the promotion of digital technology. This study presents a clear framework for digital technology application in education, which can serve as a valuable reference for researchers and educational practitioners concerned with digital technology education application in theory and practice.


Introduction

Digital technology has become an essential component of modern education, facilitating the extension of temporal and spatial boundaries and enriching the pedagogical contexts (Selwyn and Facer, 2014 ). The advent of mobile communication technology has enabled learning through social media platforms (Szeto et al. 2015 ; Pires et al. 2022 ), while the advancement of augmented reality technology has disrupted traditional conceptions of learning environments and spaces (Perez-Sanagustin et al., 2014 ; Kyza and Georgiou, 2018 ). A wide range of digital technologies has enabled learning to become a norm in various settings, including the workplace (Sjöberg and Holmgren, 2021 ), home (Nazare et al. 2022 ), and online communities (Tang and Lam, 2014 ). Education is no longer limited to fixed locations and schedules, but has permeated all aspects of life, allowing learning to continue at any time and any place (Camilleri and Camilleri, 2016 ; Selwyn and Facer, 2014 ).

The advent of digital technology has led to the creation of several informal learning environments (Greenhow and Lewin, 2015 ) that exhibit divergent form, function, features, and patterns in comparison to conventional learning environments (Nygren et al. 2019 ). Consequently, the associated teaching and learning processes, as well as the strategies for the creation, dissemination, and acquisition of learning resources, have undergone a complete overhaul. The ensuing transformations have posed a myriad of novel issues, such as the optimal structuring of teaching methods by instructors and the adoption of appropriate learning strategies by students in the new digital technology environment. Consequently, an examination of the principles that underpin effective teaching and learning in this environment is a topic of significant interest to numerous scholars engaged in digital technology education research.

Over the course of the last two decades, digital technology has made significant strides in the field of education, notably in extending education time and space and creating novel educational contexts with sustainability. Despite research attempts to consolidate the application of digital technology in education, previous studies have only focused on specific aspects of digital technology, such as Pinto and Leite’s ( 2020 ) investigation into digital technology in higher education and Mustapha et al.’s ( 2021 ) examination of the role and value of digital technology in education during the pandemic. While these studies have provided valuable insights into the practical applications of digital technology in particular educational domains, they have not comprehensively explored the macro-mechanisms and internal logic of digital technology implementation in education. Additionally, these studies were conducted over a relatively brief period, making it challenging to gain a comprehensive understanding of the macro-dynamics and evolutionary process of digital technology in education. Some studies have provided an overview of digital education from an educational perspective but lack a precise understanding of technological advancement and change (Yang et al. 2022 ). Therefore, this study seeks to employ a systematic scientific approach to collate relevant research from 2000 to 2022, comprehend the internal logic and development trends of digital technology in education, and grasp the outstanding contribution of digital technology in promoting the sustainability of education in time and space. In summary, this study aims to address the following questions:

RQ1: Since the turn of the century, what is the productivity distribution of the field of digital technology education application research at the author, country/region, institution, and journal levels?

RQ2: What is the development trend of research on the application of digital technology in education in the past two decades?

RQ3: What are the current frontiers of research on the application of digital technology in education?

Literature review

Although the term “digital technology” has become ubiquitous, scholars have yet to agree on a unified definition, because the meaning of the term is closely tied to its specific context. Within the educational research domain, Selwyn’s (2016) definition is widely favored by scholars (Pinto and Leite, 2020). Selwyn (2016) provides a comprehensive view of various concrete digital technologies and their applications in education through ten specific cases, such as immediate feedback in classes, orchestrating teaching, and community learning. Through these specific application scenarios, Selwyn (2016) argues that digital technology encompasses technologies associated with digital devices, including but not limited to tablets, smartphones, computers, and social media platforms (such as Facebook and YouTube). Furthermore, accessing the internet at any location through portable devices can be regarded as an extension of applying digital technology.

The evolving nature of digital technology has significant implications in the field of education. In the 1990s, the focus of digital technology in education was on comprehending the nuances of digital space, digital culture, and educational methodologies, with its connotations aligned more towards the idea of e-learning. The advent and subsequent widespread usage of mobile devices since the dawn of the new millennium have been instrumental in the rapid expansion of the concept of digital technology. Notably, mobile learning devices such as smartphones and tablets, along with social media platforms, have become integral components of digital technology (Conole and Alevizou, 2010; Batista et al. 2016). In recent times, the burgeoning application of AI technology in the education sector has played a vital role in enriching the digital technology lexicon (Banerjee et al. 2021). ChatGPT, for instance, is identified as a novel educational technology that has immense potential to revolutionize future education (Rospigliosi, 2023; Arif, Munaf and Ul-Haque, 2023).

Pinto and Leite ( 2020 ) conducted a comprehensive macroscopic survey of the use of digital technologies in the education sector and identified three distinct categories, namely technologies for assessment and feedback, mobile technologies, and Information Communication Technologies (ICT). This classification criterion is both macroscopic and highly condensed. In light of the established concept definitions of digital technology in the educational research literature, this study has adopted the characterizations of digital technology proposed by Selwyn ( 2016 ) and Pinto and Leite ( 2020 ) as crucial criteria for analysis and research inclusion. Specifically, this criterion encompasses several distinct types of digital technologies, including Information and Communication Technologies (ICT), Mobile tools, eXtended Reality (XR) Technologies, Assessment and Feedback systems, Learning Management Systems (LMS), Publish and Share tools, Collaborative systems, Social media, Interpersonal Communication tools, and Content Aggregation tools.

Methodology and materials

Research method: bibliometrics

Research of a bibliometric character has long been present in many aspects of human production and life, yet it long lacked systematic theoretical guidance and remained disorganized. In 1969, the British scholar Pritchard (1969) proposed “bibliometrics,” which subsequently emerged as an independent discipline in scientific quantification research. Initially, Pritchard defined bibliometrics as “the application of mathematical and statistical methods to books and other media of communication”; however, the definition was not entirely rigorous. To remedy this, Hawkins (2001) expanded Pritchard’s definition to “the quantitative analysis of the bibliographic features of a body of literature.” De Bellis further clarified the objectives of bibliometrics, stating that it aims to analyze and identify patterns in literature, such as the most productive authors, institutions, countries, and journals in scientific disciplines, trends in literary production over time, and collaboration networks (De Bellis, 2009). According to Garfield (2006), bibliometric research enables the examination of the history and structure of a field, the flow of information within the field, the impact of journals, and the citation status of publications over a longer time scale. All of these definitions illustrate the unique role of bibliometrics as a research method for evaluating specific research fields.

This study uses CiteSpace, VOSviewer, and Charticulator to analyze data and create visualizations. Each of these three tools has its own strengths, and they complement each other. CiteSpace and VOSviewer use set theory and probability theory to provide various visualization views in fields such as keywords, co-occurrence, and co-authorship. They are easy to use and produce visually appealing graphics (Chen, 2006; van Eck and Waltman, 2009), and they are currently the two most widely used bibliometric tools in the field of visualization (Pan et al. 2018). In this study, VOSviewer provided the data required for the performance analysis; Charticulator was then used to redraw the chord diagram of country collaboration from the tabular data exported from VOSviewer, complementing the mapping process; and CiteSpace was primarily used to generate keyword maps and conduct burst-word analysis.

Data retrieval

This study selected documents from the Science Citation Index Expanded (SCIE) and Social Science Citation Index (SSCI) in the Web of Science Core Collection as the data source, for the following reasons:

(1) The Web of Science Core Collection, as a high-quality digital literature resource database, has been widely accepted by many researchers and is currently considered the most suitable database for bibliometric analysis (Jing et al. 2023a ). Compared to other databases, Web of Science provides more comprehensive data information (Chen et al. 2022a ), and also provides data formats suitable for analysis using VOSviewer and CiteSpace (Gaviria-Marin et al. 2019 ).

(2) The application of digital technology in the field of education is an interdisciplinary research topic, involving technical knowledge literature belonging to the natural sciences and education-related literature belonging to the social sciences. Therefore, it is necessary to select Science Citation Index Expanded (SCIE) and Social Science Citation Index (SSCI) as the sources of research data, ensuring the comprehensiveness of data while ensuring the reliability and persuasiveness of bibliometric research (Hwang and Tsai, 2011 ; Wang et al. 2022 ).

After establishing the source of research data, it is necessary to determine a retrieval strategy (Jing et al. 2023b ). The choice of a retrieval strategy should consider a balance between the breadth and precision of the search formula. That is to say, it should encompass all the literature pertaining to the research topic while excluding irrelevant documents as much as possible. In light of this, this study has set a retrieval strategy informed by multiple related papers (Mustapha et al. 2021 ; Luo et al. 2021 ). The research by Mustapha et al. ( 2021 ) guided us in selecting keywords (“digital” AND “technolog*”) to target digital technology, while Luo et al. ( 2021 ) informed the selection of terms (such as “instruct*,” “teach*,” and “education”) to establish links with the field of education. Then, based on the current application of digital technology in the educational domain and the scope of selection criteria, we constructed the final retrieval strategy. Following the general patterns of past research (Jing et al. 2023a , 2023b ), we conducted a specific screening using the topic search (Topics, TS) function in Web of Science. For the specific criteria used in the screening for this study, please refer to Table 1 .
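For illustration, the combination of technology terms and education terms into a single topic-search (TS) string can be sketched as below. The exact search formula used by the study appears in Table 1 (not reproduced here), so the grouping and quoting shown are assumptions reconstructed from the keywords named in the text.

```python
def build_ts_query(tech_terms, edu_terms):
    """AND-join the technology terms, OR-join the education terms,
    and wrap both groups in a Web of Science topic-search (TS) clause."""
    tech = " AND ".join(f'"{t}"' for t in tech_terms)
    edu = " OR ".join(f'"{t}"' for t in edu_terms)
    return f"TS=(({tech}) AND ({edu}))"

# Terms taken from the text; the real formula in Table 1 may differ.
query = build_ts_query(["digital", "technolog*"],
                       ["instruct*", "teach*", "education"])
print(query)
# TS=(("digital" AND "technolog*") AND ("instruct*" OR "teach*" OR "education"))
```

The wildcard `*` follows Web of Science truncation syntax, so “technolog*” matches “technology,” “technologies,” and so on.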

Literature screening

Literature acquired through keyword searches may contain ostensibly related yet actually unrelated works. Therefore, to ensure the close relevance of literature included in the analysis to the research topic, it is often necessary to perform a manual screening process to identify the final literature to be analyzed, subsequent to completing the initial literature search.

The manual screening process consisted of two steps. Initially, irrelevant literature was weeded out based on the title and abstract, with two members of the research team involved in this phase. This stage lasted about one week and resulted in 1106 articles being retained. Subsequently, a comprehensive review of the full text was conducted to accurately identify the literature required for the study. To carry out the second phase of manual screening effectively and scientifically, and to minimize the potential for researcher bias, the research team established the inclusion criteria presented in Table 2. Three members were engaged in this phase, which took approximately two weeks, culminating in the retention of 588 articles after meticulous screening. The entire screening process is depicted in Fig. 1, adhering to the PRISMA guidelines (Page et al. 2021).

Fig. 1: The process of obtaining and filtering the necessary literature data for research.

Data standardization

Nguyen and Hallinger ( 2020 ) pointed out that raw data extracted from scientific databases often contains multiple expressions of the same term, and not addressing these synonymous expressions could affect research results in bibliometric analysis. For instance, in the original data, the author list may include “Tsai, C. C.” and “Tsai, C.-C.”, while the keyword list may include “professional-development” and “professional development,” which often require merging. Therefore, before analyzing the selected literature, a data disambiguation process is necessary to standardize the data (Strotmann and Zhao, 2012 ; Van Eck and Waltman, 2019 ). This study adopted the data standardization process proposed by Taskin and Al ( 2019 ), mainly including the following standardization operations:

Firstly, the author and source fields in the data are corrected and standardized to differentiate authors with similar names.

Secondly, the study checks whether the journals in which the literature was published have been renamed over the past 20 years, so as to avoid the influence of journal name changes on the analysis results.

Finally, the keyword field is standardized by unifying parts of speech and singular/plural forms of keywords, which can help eliminate redundant entries in the knowledge graph.
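A minimal sketch of the first and third normalization steps is shown below. The actual procedure of Taskin and Al (2019) is considerably more elaborate; in particular, the plural-folding rule here is deliberately naive and shown only to illustrate the idea.

```python
import re

def normalize_author(name):
    """Collapse punctuation and spacing variants so that
    'Tsai, C.-C.' and 'Tsai, C. C.' map to the same key."""
    name = re.sub(r"[.\-]", " ", name.lower())   # drop periods and hyphens
    return re.sub(r"\s+", " ", name).strip()     # collapse whitespace

def normalize_keyword(kw):
    """Unify hyphen/space variants and (naively) fold plural forms,
    e.g. 'professional-development' -> 'professional development'."""
    kw = re.sub(r"\s+", " ", kw.lower().replace("-", " ")).strip()
    # Naive singularization: strip a trailing 's' (would mangle e.g. 'analysis').
    return kw[:-1] if kw.endswith("s") and not kw.endswith("ss") else kw
```

Applying such functions before counting ensures that variant spellings are merged into a single node in the knowledge graph rather than appearing as redundant entries.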

Performance analysis (RQ1)

This section offers a thorough and detailed analysis of the state of research in the field of digital technology education. By utilizing descriptive statistics and visual maps, it provides a comprehensive overview of the development trends, authors, countries, institutions, and journal distribution within the field. The insights presented in this section are of great significance in advancing our understanding of the current state of research in this field and identifying areas for further investigation. The use of visual aids to display inter-country cooperation and the evolution of the field adds to the clarity and coherence of the analysis.

Time trend of the publications

To understand a research field, it is first necessary to understand the most basic quantitative information, among which the change in the number of publications per year best reflects the development trend of a research field. Figure 2 shows the distribution of publication dates.

Fig. 2: Time trend of the publications on the application of digital technology in education.

From Fig. 2, it can be seen that the development of this field over the past 20 years can be roughly divided into three stages. The first stage ran from 2000 to 2007, during which the number of publications was relatively low. Owing to various factors, such as the limited maturity of the technology, the academic community did not yet pay widespread attention to the role of digital technology in expanding the scope of teaching and learning. The second stage ran from 2008 to 2019, during which the overall number of publications showed an upward trend; the development of the field entered an accelerated period and attracted more and more scholars’ attention. The third stage ran from 2020 to 2022, during which the number of publications stabilized at around 100. During this period, the impact of the pandemic led a large number of scholars to focus on the role of digital technology in education during the pandemic, and research on the application of digital technology in education became a core topic in social science research.

Analysis of authors

An analysis of authors’ publication volumes provides information about the representative scholars and core research strengths of a research area. Table 3 presents information on the core authors in digital technology education application research, including name, number of publications, and average number of citations per article (based on the analysis and statistics from VOSviewer).

Variations in research foci among scholars abound. Within the field of digital technology education application research over the past two decades, Neil Selwyn stands as the most productive author, having published 15 papers garnering a total of 1027 citations, for an average of 68.47 citations per paper. As a Professor at the Faculty of Education at Monash University, Selwyn concentrates on exploring the application of digital technology in higher education contexts (Selwyn et al. 2021), as well as related products in higher education such as the Coursera, edX, and Udacity MOOC platforms (Bulfin et al. 2014). Selwyn’s contributions from the educational sociology perspective include extensive research on the impact of digital technology on education, highlighting the spatiotemporal extension of educational processes and practices through technological means as the greatest value of educational technology (Selwyn, 2012; Selwyn and Facer, 2014). In addition, he provides a blueprint for the development of future schools in 2030 based on the present impact of digital technology on education (Selwyn et al. 2019). The second most productive author in this field, Henderson, also offers significant contributions to the understanding of the important value of digital technology in education, specifically in the higher education setting, with a focus on the impact of the pandemic (Henderson et al. 2015; Cohen et al. 2022). In contrast, Edwards’ research interests center on early childhood education, particularly the application of digital technology in this context (Edwards, 2013; Bird and Edwards, 2015). Additionally, at the technical level, Edwards mainly favors digital game technology, as it is a form of digital technology that children find relatively easy to accept (Edwards, 2015).

Analysis of countries/regions and organizations

The present study aimed to ascertain the leading countries in digital technology education application research by analyzing the 75 countries related to the 588 works of literature. Table 4 depicts the top ten countries that have contributed significantly to this field in terms of publication count (based on the analysis and statistics from VOSviewer). Our analysis of Table 4 data shows that England emerged as the most influential country/region, with 92 published papers and 2401 citations. Australia and the United States secured the second and third ranks, respectively, with 90 papers (2187 citations) and 70 papers (1331 citations) published. Geographically, most of the countries in the top ten by publication volume are situated in Oceania, North America, and Europe, with China being the only exception. Notably, all these countries, except China, belong to the group of developed nations, suggesting that economic strength is an important enabler of research in the digital technology education application field.

This study presents a visual representation of the publication output and cooperation relationships among different countries in the field of digital technology education application research. Specifically, a chord diagram is employed to display the top 30 countries in terms of publication output, as depicted in Fig. 3 . The chord diagram is composed of nodes and chords, where the nodes are positioned as scattered points along the circumference, and the length of each node corresponds to the publication output, with longer lengths indicating higher publication output. The chords, on the other hand, represent the cooperation relationships between any two countries, and are weighted based on the degree of closeness of the cooperation, with wider chords indicating closer cooperation. Through the analysis of the cooperation relationships, the findings suggest that the main publishing countries in this field are engaged in cooperative relationships with each other, indicating a relatively high level of international academic exchange and research internationalization.

Fig. 3: In the diagram, nodes are scattered along the circumference of a circle, with the length of each node representing the volume of publications. The weighted arcs connecting any two points on the circle are known as chords, representing the collaborative relationship between the two, with the width of the arc indicating the closeness of the collaboration.

Further analyzing Fig. 3 , we can extract more valuable information, enabling a deeper understanding of the connections between countries in the research field of digital technology in educational applications. It is evident that certain countries, such as the United States, China, and England, display thicker connections, indicating robust collaborative relationships in terms of productivity. These thicker lines signify substantial mutual contributions and shared objectives in certain sectors or fields, highlighting the interconnectedness and global integration in these areas. By delving deeper, we can also explore potential future collaboration opportunities through the chord diagram, identifying possible partners to propel research and development in this field. In essence, the chord diagram successfully encapsulates and conveys the multi-dimensionality of global productivity and cooperation, allowing for a comprehensive understanding of the intricate inter-country relationships and networks in a global context, providing valuable guidance and insights for future research and collaborations.
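The data behind such a chord diagram reduce to two counters: a per-country publication count (node length) and a per-pair co-authorship count (chord width). A minimal sketch, using hypothetical toy records rather than the study’s data:

```python
from collections import Counter
from itertools import combinations

def cooperation_weights(pub_countries):
    """From per-publication country lists, count publications per country
    (node length) and per unordered country pair (chord width)."""
    node_counts, pair_counts = Counter(), Counter()
    for countries in pub_countries:
        uniq = sorted(set(countries))        # each country counted once per paper
        node_counts.update(uniq)
        pair_counts.update(combinations(uniq, 2))
    return node_counts, pair_counts

# Hypothetical toy records, not the study's data.
pubs = [["England", "Australia"], ["USA", "China"],
        ["England", "USA"], ["England", "Australia"]]
nodes, chords = cooperation_weights(pubs)
print(nodes["England"], chords[("Australia", "England")])
# 3 2
```

Sorting each country list before pairing ensures that ("Australia", "England") and ("England", "Australia") accumulate into the same chord.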

An in-depth examination of the publishing institutions is provided in Table 5 , showcasing the foremost 10 institutions ranked by their publication volume. Notably, Monash University and Australian Catholic University, situated in Australia, have recorded the most prolific publications within the digital technology education application realm, with 22 and 10 publications respectively. Moreover, the University of Oslo from Norway is featured among the top 10 publishing institutions, with an impressive average citation count of 64 per publication. It is worth highlighting that six institutions based in the United Kingdom were also ranked within the top 10 publishing institutions, signifying their leading position in this area of research.

Analysis of journals

Journals are the main carriers for publishing high-quality papers. Some scholars point out that the two key factors for measuring a journal’s influence in a given field are the number of articles it publishes and the number of citations they receive: the more papers a journal publishes and the more citations they attract, the greater its influence (Dzikowski, 2018). Therefore, this study used VOSviewer to identify the top 10 journals with the most publications in the field of digital technology in education and calculated the average citations per article (see Table 6).

Based on Table 6, it is apparent that the largest numbers of articles in the domain of digital technology in education research were published in Education and Information Technologies (47 articles), Computers & Education (34 articles), and British Journal of Educational Technology (32 articles), a higher output than that of the other journals. This underscores that these three journals concentrate most heavily on the application of digital technology in education. Several other journals, such as Technology, Pedagogy and Education and Sustainability, have published more than 15 articles in this domain. Sustainability represents the open access movement, which has notably facilitated research progress in this field, indicating that the development of open access journals in recent years has had a significant impact. Although there is still considerable disagreement among scholars on the optimal approach to achieving open access, the notion that research outcomes should be accessible to all is widely recognized (Huang et al. 2020). Further analysis of the research fields to which these journals belong shows that, except for Sustainability, they all pertain to educational technology, thus providing a qualitative definition of the research area of digital technology education from the perspective of journals.
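The two indicators named above, publication count and average citations per article, can be computed in a few lines. The citation totals below are hypothetical placeholders; only the article counts come from the text.

```python
def journal_influence(records):
    """records: {journal: (articles, total_citations)}.
    Returns rows (journal, articles, citations, avg citations per article),
    sorted by article count, mirroring the ranking used in Table 6."""
    rows = [(j, n, c, round(c / n, 2)) for j, (n, c) in records.items()]
    return sorted(rows, key=lambda r: r[1], reverse=True)

# Hypothetical citation totals; only the article counts appear in the text.
demo = {"Education and Information Technologies": (47, 940),
        "Computers & Education": (34, 1700)}
for name, n, c, avg in journal_influence(demo):
    print(name, n, avg)
```

Note that ranking by article count and ranking by average citations can disagree, which is why the study reports both indicators side by side.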

Temporal keyword analysis: thematic evolution (RQ2)

The evolution of research themes is a dynamic process, and previous studies have attempted to present the developmental trajectory of fields by drawing keyword networks in phases (Kumar et al. 2021; Chen et al. 2022b). To understand the shifts in research topics across different periods, this study follows past research and, based on the significant changes in the research field and corresponding technological advancements during the outlined periods, divides the timeline into four stages: the first from January 2000 to December 2005, the second from January 2006 to December 2011, the third from January 2012 to December 2017, and the fourth from January 2018 to December 2022. The division into these four stages was determined through a combination of bibliometric analysis and literature review, which presented a clear trajectory of the field’s development. The study analyzes the keyword networks for each period (as there are only three articles in the first stage, it was not possible to generate an appropriate keyword co-occurrence map, so only the maps for the second to fourth stages are provided) in order to trace the evolutionary track of the digital technology education application research field over time.
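The four-stage binning described above can be sketched as follows. The boundary years come from the text; the short stage names for the third and fourth periods follow the abstract’s wording and are otherwise assumptions.

```python
from collections import Counter

# Stage boundaries (inclusive years) as described in the text.
STAGES = [("germination", 2000, 2005),
          ("initial development", 2006, 2011),
          ("key exploration", 2012, 2017),
          ("acceleration", 2018, 2022)]

def assign_stage(year):
    """Return the stage name for a publication year, or None if out of range."""
    for name, start, end in STAGES:
        if start <= year <= end:
            return name
    return None

def stage_counts(years):
    """Bin publication years into the four stages; a keyword co-occurrence
    map is then built separately for each bin."""
    return Counter(s for s in map(assign_stage, years) if s is not None)

print(stage_counts([2001, 2009, 2009, 2015, 2021]))
```

Publications outside 2000–2022 fall out of every bin, matching the study’s retrieval window.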

2000.1–2005.12: germination period

From January 2000 to December 2005, digital technology education application research was in its infancy. Only three studies focused on digital technology, all of which were related to computers. Due to the popularity of computers, the home became a new learning environment, highlighting the important role of digital technology in expanding the scope of learning spaces (Sutherland et al. 2000 ). In specific disciplines and contexts, digital technology was first favored in medical clinical practice, becoming an important tool for supporting the learning of clinical knowledge and practice (Tegtmeyer et al. 2001 ; Durfee et al. 2003 ).

2006.1–2011.12: initial development period

The period from January 2006 to December 2011 marked the initial development of digital technology education research. Research related to digital technology grew significantly, and discussions and theoretical analyses of “digital natives” emerged. During this phase, scholars debated “how to use digital technology reasonably” and “whether current educational models and school curriculum design need to be adjusted on a large scale” (Bennett and Maton, 2010 ; Selwyn, 2009 ; Margaryan et al. 2011 ). These theoretical and speculative arguments offered a unique perspective on the impact of digital technology on education and teaching. As vocabulary such as “rethinking”, “disruptive pedagogy”, and “attitude” in Fig. 4 suggests, many scholars undertook calm reflection and analysis amid the digital technology trend (Laurillard, 2008 ; Vratulis et al. 2011 ). Technology itself was still changing dramatically during this phase: the development of mobile technology had already caught the attention of many scholars (Wong et al. 2011 ), while digital technology represented by computers remained very active (Selwyn et al. 2011 ). Changes in technological form inevitably lead to educational transformation. Collins and Halverson ( 2010 ) summarized the prospects and challenges of using digital technology for learning and educational practice, arguing that digital technology would bring a disruptive revolution to the education field and give rise to a new educational system. In addition, the term “teacher education” in Fig. 4 reflects the impact of digital technology development on teachers. The rapid development of technology has widened the generation gap between teachers and students; to ensure smooth communication with students, teachers must keep up with technological development and adopt a lifelong learning mindset (Donnison, 2009 ).

Figure 4. Each node represents a keyword, with the size of the node indicating the keyword’s frequency of occurrence. The connections represent co-occurrence relationships between keywords, with a higher co-occurrence frequency resulting in tighter connections.

2012.1–2017.12: critical exploration period

During the period from January 2012 to December 2017, research on the application of digital technology in education entered a significant exploration phase. As Fig. 5 shows, unlike the previous stage, concrete elements of specific digital technologies began to increase markedly: technological contexts became richer, research methods more varied, and learning modes more diverse. Moreover, the temporal and spatial constraints on the learning environment were further relaxed, as noted in previous literature (Za et al. 2014 ). Given the rapidly accelerating pace of technological development, the education system in the digital era was in urgent need of collaborative evolution and reconstruction, as argued by Davis, Eickelmann, and Zaka ( 2013 ).

Figure 5.

In the domain of digital technology, social media garnered substantial scholarly attention as a promising avenue for learning (Pasquini and Evangelopoulos, 2016 ). Implementing social media in education offers several benefits, including freeing education from the restrictions of physical distance and time and erasing conventional educational boundaries. With the widespread adoption of mobile devices, the user-generated content (UGC) model of social media emerged as a crucial source of knowledge creation and distribution, and social networks became an integral component of ubiquitous learning environments (Hwang et al. 2013 ). Social media allows individuals to function as both knowledge producers and recipients, blurring the conventional roles of learner and teacher: on mobile platforms, these roles are not fixed but interchangeable.

In terms of research methodology, the prevalence of empirical studies with survey designs in the field of educational technology during this period is evident from vocabulary such as “achievement,” “acceptance,” “attitude,” and “ict” in Fig. 5 . These studies aimed to understand learners’ willingness to adopt and attitudes toward new technologies, and some investigated the impact of digital technologies on learning outcomes through quasi-experimental designs (Domínguez et al. 2013 ). Among these empirical studies, mobile learning emerged as a hot topic, which is not surprising. First, the advantages of mobile learning environments over traditional ones had been empirically demonstrated (Hwang et al. 2013 ). Second, learners born around the turn of the century had been heavily influenced by digital technologies and had developed learning styles more open to mobile devices as a means of learning. Consequently, analyzing mobile learning as a relatively novel mode of learning became an important issue for scholars in the field of educational technology.

The intervention of technology has led to the emergence of several novel learning modes, with blended learning the most representative of this phase. Blended learning, a concept introduced in the information age, emphasizes integrating the benefits of traditional learning methods and online learning. This learning mode not only highlights the prominent role of teachers in guiding, inspiring, and monitoring the learning process but also underlines the importance of learners’ initiative, enthusiasm, and creativity. Despite being an early conceptualization, blended learning’s meaning has been expanded by the widespread use of mobile technology and social media in education. The implementation of new technologies, particularly mobile devices, has transformed curriculum design and increased the flexibility and autonomy of students’ learning processes (Trujillo Maza et al. 2016 ), rekindling scholarly attention to this learning mode. However, some scholars have raised concerns about potential drawbacks of the blended learning model, such as its disruptive impact on the traditional teaching system and the lack of systematic coping strategies and relevant policies in many schools and regions (Moskal et al. 2013 ).

2018.1–2022.12: accelerated transformation period

The period from January 2018 to December 2022 witnessed a rapid transformation in research on the application of digital technology in education. Publication output in the field reached its peak, driven in large part by factors such as the COVID-19 pandemic (Yu et al. 2023 ). Research during this period built on keywords carried over from the previous phase, such as “achievement”, “attitude”, and “social media”, and added more elements characteristic of this research field, such as digital literacy, digital competence, and professional development, as depicted in Fig. 6 . Alongside this, scholars’ expectations for the value of digital technology expanded: improving learning efficiency and performance is no longer the sole focus, and some research now aims to cultivate learners’ motivation and enhance their self-efficacy through the reasonable application of digital technology, as demonstrated by recent studies (Beardsley et al. 2021 ; Creely et al. 2021 ).

Figure 6.

The COVID-19 pandemic emerged as a crucial backdrop for digital technology’s role in sustaining global education, as highlighted by recent scholarly research (Zhou et al. 2022 ; Pan and Zhang, 2020 ; Mo et al. 2022 ). The online learning environment, supported by digital technology, became the primary battleground of global education (Yu, 2022 ). This social context prompted various studies: some scholars posited that the pandemic disrupted the traditional teaching order while expanding learning possibilities in terms of patterns and forms (Alabdulaziz, 2021 ). The pandemic also acted as a catalyst for teaching and technological innovation, a viewpoint that has been empirically substantiated (Moorhouse and Wong, 2021 ). Additionally, some scholars regard the pandemic’s push as a crucial driving force for the digital transformation of the education system, serving as an essential mechanism for overcoming the system’s inertia (Romero et al. 2021 ).

The rapid outbreak of the pandemic posed a challenge to the large-scale implementation of digital technologies, which was influenced by a complex interplay of subjective and objective factors. Objective constraints included the lack of infrastructure in some regions to support digital technologies, while subjective obstacles included psychological resistance among certain students and teachers (Moorhouse, 2021 ). These factors greatly impacted the progress of online learning during the pandemic. Additionally, Timotheou et al. ( 2023 ) conducted a comprehensive systematic review of existing research on digital technology use during the pandemic, highlighting the critical role played by various factors such as learners’ and teachers’ digital skills, teachers’ personal attributes and professional development, school leadership and management, and administration in facilitating the digitalization and transformation of schools.

The current stage of research is characterized by the pivotal term “digital literacy,” denoting a growing interest in learners’ attitudes toward and adoption of emerging technologies. Initially, the term “literacy” was restricted to fundamental abilities and knowledge associated with books and print materials (McMillan, 1996 ). With the swift advancement of computers and digital technology, however, there have been various attempts to broaden the scope of literacy beyond its traditional meaning, including game literacy (Buckingham and Burn, 2007 ), information literacy (Eisenberg, 2008 ), and media literacy (Turin and Friesem, 2020 ). Digital literacy has likewise emerged as a crucial concept: Gilster ( 1997 ) was the first to introduce it, referring to the proficiency in utilizing technology and processing digital information in academic, professional, and daily-life settings. In practical educational settings, learners with higher digital literacy tend to master digital devices quickly and apply them intelligently to education and teaching (Yu, 2022 ).

The utilization of digital technology in education has undergone significant changes over the past two decades, and has been a crucial driver of educational reform with each new technological revolution. The impact of these changes on the underlying logic of digital technology education applications has been noticeable. From computer technology to more recent developments such as virtual reality (VR), augmented reality (AR), and artificial intelligence (AI), the acceleration in digital technology development has been ongoing. Educational reforms spurred by digital technology development continue to be dynamic, as each new digital innovation presents new possibilities and models for teaching practice. This is especially relevant in the post-pandemic era, where the importance of technological progress in supporting teaching cannot be overstated (Mughal et al. 2022 ). Existing digital technologies have already greatly expanded the dimensions of education in both time and space, while future digital technologies aim to expand learners’ perceptions. Researchers have highlighted the potential of integrated technology and immersive technology in the development of the educational metaverse, which is highly anticipated to create a new dimension for the teaching and learning environment, foster a new value system for the discipline of educational technology, and more effectively and efficiently achieve the grand educational blueprint of the United Nations’ Sustainable Development Goals (Zhang et al. 2022 ; Li and Yu, 2023 ).

Hotspot evolution analysis (RQ3)

The examination of keyword evolution reveals a consistent trend in the advancement of digital technology education application research. The emergence and transformation of keywords serve as indicators of the varying research interests in this field. Thus, the utilization of the burst detection function available in CiteSpace allowed for the identification of the top 10 burst words that exhibited a high level of burst strength. This outcome is illustrated in Table 7 .
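CiteSpace's burst detection is based on Kleinberg's two-state model: a keyword's yearly frequency is modeled as being emitted either at a baseline rate or at an elevated "burst" rate, and the minimum-cost sequence of states over the years is found. The sketch below is a minimal batch version of that idea, not CiteSpace's implementation; the parameter values (`s`, `gamma`) and the document counts in the usage note are made up for illustration.

```python
import math

def _log_choose(n, k):
    return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)

def _cost(p, r, d):
    # Negative log-likelihood of the keyword appearing in r of d documents
    # when the per-document emission rate is p.
    return -(_log_choose(d, r) + r * math.log(p) + (d - r) * math.log(1 - p))

def burst_states(r, d, s=2.0, gamma=1.0):
    """Label each year 0 (baseline) or 1 (burst) for one keyword.

    r[t] is the number of documents mentioning the keyword in year t and
    d[t] the total documents that year. The burst state emits at s times
    the overall rate; entering it costs gamma * ln(n), which keeps short
    random spikes from being flagged as bursts.
    """
    n = len(r)
    p0 = sum(r) / sum(d)                    # baseline rate
    p1 = min(s * p0, 0.9999)                # elevated (burst) rate
    trans = gamma * math.log(n)             # cost of switching 0 -> 1
    INF = float("inf")
    cost = [[INF, INF] for _ in range(n)]
    back = [[0, 0] for _ in range(n)]
    cost[0][0] = _cost(p0, r[0], d[0])
    cost[0][1] = trans + _cost(p1, r[0], d[0])
    for t in range(1, n):                   # Viterbi over the two states
        for q, p in ((0, p0), (1, p1)):
            for prev in (0, 1):
                c = cost[t - 1][prev] + (trans if (prev, q) == (0, 1) else 0.0)
                if c < cost[t][q]:
                    cost[t][q], back[t][q] = c, prev
            cost[t][q] += _cost(p, r[t], d[t])
    states = [0] * n
    states[-1] = 0 if cost[-1][0] <= cost[-1][1] else 1
    for t in range(n - 1, 0, -1):           # backtrack the cheapest path
        states[t - 1] = back[t][states[t]]
    return states
```

For example, a keyword appearing in 1–2 of 100 papers per year that jumps to 20–30 papers for three consecutive years would have exactly those three years labeled as a burst; the burst "strength" CiteSpace reports corresponds to the cost saved by the burst state over those years.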

According to the results presented in Table 7 , burst keywords in digital technology education research are concentrated mainly between 2018 and 2022. Before that time frame, the only burst keywords were “information technology” and “computer”. Notably, “computer” maintained a high burst strength from 2008 to 2018, reflecting the central position of the computer in digital technology as the main carrier of many digital tools such as learning management systems (LMS) and assessment and feedback systems (Barlovits et al. 2022 ).

Since 2018, an increasing number of studies have focused on evaluating learners’ capability to accept, apply, and comprehend digital technologies. As the burst terms “digital literacy” and “digital skill” indicate, assessing learners’ digital literacy has become a critical task, with scholarly efforts directed toward developing literacy assessment tools and conducting empirical assessments. Enhancing the digital literacy of both learners and educators has also garnered significant attention (Nagle, 2018 ; Yu, 2022 ). Simultaneously, given the widespread use of digital technologies in formal and informal learning settings alike, promoting learners’ digital skills has become a crucial objective for contemporary schools (Nygren et al. 2019 ; Forde and OBrien, 2022 ).

Since 2020, the field of applied research on digital technology education has witnessed the emergence of three new hotspots, all of which have been affected to some extent by the pandemic. Firstly, digital technology has been widely applied in physical education, which is one of the subjects that has been severely affected by the pandemic (Parris et al. 2022 ; Jiang and Ning, 2022 ). Secondly, digital transformation has become an important measure for most schools, especially higher education institutions, to cope with the impact of the pandemic globally (García-Morales et al. 2021 ). Although the concept of digital transformation was proposed earlier, the COVID-19 pandemic has greatly accelerated this transformation process. Educational institutions must carefully redesign their educational products to face this new situation, providing timely digital learning methods, environments, tools, and support systems that have far-reaching impacts on modern society (Krishnamurthy, 2020 ; Salas-Pilco et al. 2022 ). Moreover, the professional development of teachers has become a key mission of educational institutions in the post-pandemic era. Teachers need to have a certain level of digital literacy and be familiar with the tools and online teaching resources used in online teaching, which has become a research hotspot today. Organizing digital skills training for teachers to cope with the application of emerging technologies in education is an important issue for teacher professional development and lifelong learning (Garzón-Artacho et al. 2021 ). As the main organizers and practitioners of emergency remote teaching (ERT) during the pandemic, teachers must put cognitive effort into their professional development to ensure effective implementation of ERT (Romero-Hall and Jaramillo Cherrez, 2022 ).

The burst word “digital transformation” reveals that we are in the midst of an ongoing digital technology revolution. With the emergence of innovative digital technologies such as ChatGPT and Microsoft 365 Copilot, technology trends will continue to evolve, albeit unpredictably. While the impact of these advancements on school education remains uncertain, it is anticipated that the widespread integration of technology will significantly affect the current education system. Rejecting emerging technologies without careful consideration is unwise. Like any revolution, the technological revolution in the education field has both positive and negative aspects. Detractors argue that digital technology disrupts learning and memory (Baron, 2021 ) or causes learners to become addicted and distracted from learning (Selwyn and Aagaard, 2020 ). On the other hand, the prudent use of digital technology in education offers a glimpse of a golden age of open learning. Educational leaders and practitioners have the opportunity to leverage cutting-edge digital technologies to address current educational challenges and develop a rational path for the sustainable and healthy growth of education.

Discussion on performance analysis (RQ1)

The field of digital technology education application research has experienced substantial growth since the turn of the century, a phenomenon that is quantifiably apparent through an analysis of authorship, country/region contributions, and institutional engagement. This expansion reflects the increased integration of digital technologies in educational settings and the heightened scholarly interest in understanding and optimizing their use.

Discussion on authorship productivity in digital technology education research

The authorship distribution within digital technology education research is indicative of the field’s intellectual structure and depth. A primary figure in this domain is Neil Selwyn, whose substantial citation rate underscores the profound impact of his work. His focus on the implications of digital technology in higher education and educational sociology has proven to be seminal. Selwyn’s research trajectory, especially the exploration of spatiotemporal extensions of education through technology, provides valuable insights into the multifaceted role of digital tools in learning processes (Selwyn et al. 2019 ).

Other notable contributors, like Henderson and Edwards, present diversified research interests, such as the impact of digital technologies during the pandemic and their application in early childhood education, respectively. Their varied focuses highlight the breadth of digital technology education research, encompassing pedagogical innovation, technological adaptation, and policy development.

Discussion on country/region-level productivity and collaboration

At the country/region level, the United Kingdom, specifically England, emerges as a leading contributor with 92 published papers and a significant citation count. This is closely followed by Australia and the United States, indicating a strong English-speaking research axis. Such geographical concentration of scholarly output often correlates with investment in research and development, technological infrastructure, and the prevalence of higher education institutions engaging in cutting-edge research.

China’s notable inclusion as the only non-Western country among the top contributors to the field suggests a growing research capacity and interest in digital technology in education. However, the lower average citation per paper for China could reflect emerging engagement or different research focuses that may not yet have achieved the same international recognition as Western counterparts.

The chord diagram analysis furthers this understanding, revealing dense interconnections between countries like the United States, China, and England, which indicates robust collaborations. Such collaborations are fundamental in addressing global educational challenges and shaping international research agendas.
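A chord diagram of this kind is driven by a symmetric country-collaboration matrix: cell [i][j] counts papers co-authored across countries i and j. A minimal sketch of that data-preparation step, using hypothetical papers (the review's actual records and tooling are not reproduced here):

```python
from itertools import combinations

def chord_matrix(papers):
    """Build the symmetric collaboration matrix behind a chord diagram.

    `papers` is a list with one entry per article: the set of countries
    represented on its author list. Returns the sorted country labels and
    a matrix where cell [i][j] counts articles co-authored by countries
    i and j; each unordered pair on a paper contributes one link, which
    is how a chord diagram weights its ribbons.
    """
    labels = sorted({c for countries in papers for c in countries})
    index = {c: i for i, c in enumerate(labels)}
    matrix = [[0] * len(labels) for _ in labels]
    for countries in papers:
        for a, b in combinations(countries, 2):
            i, j = index[a], index[b]
            matrix[i][j] += 1
            matrix[j][i] += 1
    return labels, matrix
```

Single-country papers contribute no off-diagonal links, so the matrix isolates genuinely international collaborations; a plotting library then maps each nonzero cell to a ribbon between the two country arcs.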

Discussion on institutional-level contributions to digital technology education

Institutional productivity in digital technology education research reveals a constellation of universities driving the field forward. Monash University and the Australian Catholic University have the highest publication output, signaling Australia’s significant role in advancing digital education research. The University of Oslo’s remarkable average citation count per publication indicates influential research contributions, potentially reflecting high-quality studies that resonate with the broader academic community.

The strong showing of UK institutions, including the University of London, The Open University, and the University of Cambridge, reinforces the UK’s prominence in this research field. Such institutions are often at the forefront of pedagogical innovation, benefiting from established research cultures and funding mechanisms that support sustained inquiry into digital education.

Discussion on journal publication analysis

An examination of journal outputs offers a lens into the communicative channels of the field’s knowledge base. Journals such as Education and Information Technologies , Computers & Education , and the British Journal of Educational Technology not only serve as the primary disseminators of research findings but also as indicators of research quality and relevance. The impact factor (IF) serves as a proxy for the quality and influence of these journals within the academic community.

The high citation counts for articles published in Computers & Education suggest that research disseminated through this medium has a wide-reaching impact and is of particular interest to the field. This is further evidenced by its significant IF of 11.182, indicating that the journal is a pivotal platform for seminal work in the application of digital technology in education.

The authorship, regional, and institutional productivity in the field of digital technology education application research collectively narrate the evolution of this domain since the turn of the century. The prominence of certain authors and countries underscores the importance of socioeconomic factors and existing academic infrastructure in fostering research productivity. Meanwhile, the centrality of specific journals as outlets for high-impact research emphasizes the role of academic publishing in shaping the research landscape.

As the field continues to grow, future research may benefit from leveraging the collaborative networks that have been elucidated through this analysis, perhaps focusing on underrepresented regions to broaden the scope and diversity of research. Furthermore, the stabilization of publication numbers in recent years invites a deeper exploration into potential plateaus in research trends or saturation in certain sub-fields, signaling an opportunity for novel inquiries and methodological innovations.

Discussion on the evolutionary trends (RQ2)

The evolution of the research field concerning the application of digital technology in education over the past two decades is a story of convergence, diversification, and transformation, shaped by rapid technological advancements and shifting educational paradigms.

At the turn of the century, the inception of digital technology in education was largely exploratory, with a focus on how emerging computer technologies could be harnessed to enhance traditional learning environments. Research from this early period was primarily descriptive, reflecting on the potential and challenges of incorporating digital tools into the educational setting. This phase was critical in establishing the fundamental discourse that would guide subsequent research, as it set the stage for understanding the scope and impact of digital technology in learning spaces (Wang et al. 2023 ).

As the first decade progressed, the narrative expanded to encompass the pedagogical implications of digital technologies. This was a period of conceptual debates, where terms like “digital natives” and “disruptive pedagogy” entered the academic lexicon, underscoring the growing acknowledgment of digital technology as a transformative force within education (Bennett and Maton, 2010 ). During this time, the research began to reflect a more nuanced understanding of the integration of technology, considering not only its potential to change where and how learning occurred but also its implications for educational equity and access.

In the second decade, with the maturation of internet connectivity and mobile technology, the focus of research shifted from theoretical speculations to empirical investigations. The proliferation of digital devices and the ubiquity of social media influenced how learners interacted with information and each other, prompting a surge in studies that sought to measure the impact of these tools on learning outcomes. The digital divide and issues related to digital literacy became central concerns, as scholars explored the varying capacities of students and educators to engage with technology effectively.

Throughout this period, there was an increasing emphasis on the individualization of learning experiences, facilitated by adaptive technologies that could cater to the unique needs and pacing of learners (Jing et al. 2023a ). This individualization was coupled with a growing recognition of the importance of collaborative learning, both online and offline, and the role of digital tools in supporting these processes. Blended learning models, which combined face-to-face instruction with online resources, emerged as a significant trend, advocating for a balance between traditional pedagogies and innovative digital strategies.

The later years, particularly marked by the COVID-19 pandemic, accelerated the necessity for digital technology in education, transforming it from a supplementary tool to an essential platform for delivering education globally (Mo et al. 2022 ; Mustapha et al. 2021 ). This era brought about an unprecedented focus on online learning environments, distance education, and virtual classrooms. Research became more granular, examining not just the pedagogical effectiveness of digital tools, but also their role in maintaining continuity of education during crises, their impact on teacher and student well-being, and their implications for the future of educational policy and infrastructure.

Across these two decades, the research field has seen a shift from examining digital technology as an external addition to the educational process, to viewing it as an integral component of curriculum design, instructional strategies, and even assessment methods. The emergent themes have broadened from a narrow focus on specific tools or platforms to include wider considerations such as data privacy, ethical use of technology, and the environmental impact of digital tools.

Moreover, the field has moved from considering the application of digital technology in education as a primarily cognitive endeavor to recognizing its role in facilitating socio-emotional learning, digital citizenship, and global competencies. Researchers have increasingly turned their attention to the ways in which technology can support collaborative skills, cultural understanding, and ethical reasoning within diverse student populations.

In summary, more than twenty years of research on digital technology applications in education have been characterized by a progression from foundational inquiries to complex analyses of digital integration. This evolution has mirrored the trajectory of technology itself, from a facilitative tool to a pervasive ecosystem defining contemporary educational experiences. As we look to the future, the field is poised to delve into the implications of emerging technologies like AI, AR, and VR, and their potential to redefine the educational landscape even further. This ongoing metamorphosis suggests that the application of digital technology in education will continue to be a rich area of inquiry, demanding continual adaptation and forward-thinking from educators and researchers alike.

Discussion on the study of research hotspots (RQ3)

The analysis of keyword evolution in digital technology education application research elucidates the current frontiers in the field, reflecting a trajectory that is in tandem with the rapidly advancing digital age. This landscape is sculpted by emergent technological innovations and shaped by the demands of an increasingly digital society.

Interdisciplinary integration and pedagogical transformation

One of the frontiers identified from recent keyword bursts includes the integration of digital technology into diverse educational contexts, particularly noted with the keyword “physical education.” The digitalization of disciplines traditionally characterized by physical presence illustrates the pervasive reach of technology and signifies a push towards interdisciplinary integration where technology is not only a facilitator but also a transformative agent. This integration challenges educators to reconceptualize curriculum delivery to accommodate digital tools that can enhance or simulate the physical aspects of learning.

Digital literacy and skills acquisition

Another pivotal frontier is the focus on “digital literacy” and “digital skill”, which has intensified in recent years. This suggests a shift from mere access to technology towards a comprehensive understanding and utilization of digital tools. In this realm, the emphasis is not only on the ability to use technology but also on critical thinking, problem-solving, and the ethical use of digital resources (Yu, 2022 ). The acquisition of digital literacy is no longer an additive skill but a fundamental aspect of modern education, essential for navigating and contributing to the digital world.

Educational digital transformation

The keyword “digital transformation” marks a significant research frontier, emphasizing the systemic changes that education institutions must undergo to align with the digital era (Romero et al. 2021 ). This transformation includes the redesigning of learning environments, pedagogical strategies, and assessment methods to harness digital technology’s full potential. Research in this area explores the complexity of institutional change, addressing the infrastructural, cultural, and policy adjustments needed for a seamless digital transition.

Engagement and participation

Further exploration into “engagement” and “participation” underscores the importance of student-centered learning environments that are mediated by technology. The current frontiers examine how digital platforms can foster collaboration, inclusivity, and active learning, potentially leading to more meaningful and personalized educational experiences. Here, the use of technology seeks to support the emotional and cognitive aspects of learning, moving beyond the transactional view of education to one that is relational and interactive.

Professional development and teacher readiness

As the field evolves, “professional development” emerges as a crucial area, particularly in light of the pandemic which necessitated emergency remote teaching. The need for teacher readiness in a digital age is a pressing frontier, with research focusing on the competencies required for educators to effectively integrate technology into their teaching practices. This includes familiarity with digital tools, pedagogical innovation, and an ongoing commitment to personal and professional growth in the digital domain.

Pandemic as a catalyst

The recent pandemic has acted as a catalyst for accelerated research and application in this field, particularly in the domains of “digital transformation,” “professional development,” and “physical education.” This period has been a litmus test for the resilience and adaptability of educational systems to continue their operations in an emergency. Research has thus been directed at understanding how digital technologies can support not only continuity but also enhance the quality and reach of education in such contexts.

Ethical and societal considerations

The frontier of digital technology in education is also expanding to consider broader ethical and societal implications. This includes issues of digital equity, data privacy, and the sociocultural impact of technology on learning communities. The research explores how educational technology can be leveraged to address inequities and create more equitable learning opportunities for all students, regardless of their socioeconomic background.

Innovation and emerging technologies

Looking forward, the frontiers are set to be influenced by ongoing and future technological innovations, such as artificial intelligence (AI) (Wu and Yu 2023; Chen et al. 2022a). The exploration into how these technologies can be integrated into educational practices to create immersive and adaptive learning experiences represents a bold new chapter for the field.

In conclusion, the current frontiers of research on the application of digital technology in education are multifaceted and dynamic. They reflect an overarching movement towards deeper integration of technology in educational systems and pedagogical practices, where the goals are not only to facilitate learning but to redefine it. As these frontiers continue to expand and evolve, they will shape the educational landscape, requiring a concerted effort from researchers, educators, policymakers, and technologists to navigate the challenges and harness the opportunities presented by the digital revolution in education.

Conclusions and future research

Conclusions

The utilization of digital technology in education is a research area that cuts across multiple technical and educational domains and continues to experience dynamic growth due to the continuous progress of technology. In this study, a systematic review of this field was conducted through bibliometric techniques to examine its development trajectory. The primary focus of the review was to investigate the leading contributors, productive national institutions, significant publications, and evolving development patterns. The study’s quantitative analysis resulted in several key conclusions that shed light on this research field’s current state and future prospects.
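As a rough illustration of the descriptive tallying that underlies the productivity findings reported below, the following Python sketch counts publications per year and the most productive authors from exported records. The record structure and the field tags "PY" and "AU" are illustrative assumptions modeled on Web of Science exports, not the study's actual pipeline, which relied on dedicated tools such as CiteSpace and VOSviewer:

```python
from collections import Counter

# Each record mimics a Web of Science export row: "PY" holds the
# publication year and "AU" a semicolon-separated author string.
# These field tags are assumptions for this sketch.
def summarize(records):
    years = Counter(r["PY"] for r in records if r["PY"])
    authors = Counter(
        name.strip()
        for r in records
        for name in r["AU"].split(";")
        if name.strip()
    )
    return years, authors

records = [
    {"PY": "2020", "AU": "Selwyn N; Henderson M"},
    {"PY": "2021", "AU": "Selwyn N"},
    {"PY": "2021", "AU": "Edwards S"},
]
years, authors = summarize(records)
print(years.most_common())     # publication counts per year
print(authors.most_common(1))  # most productive author
```

Dedicated bibliometric software performs far richer analyses (co-citation networks, keyword co-occurrence, burst detection), but the counting logic behind productivity rankings is essentially this simple.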

(1) The research field of digital technology education applications has entered a stage of rapid development, with the pandemic in particular driving a recent peak in publications. Within this field, several key authors (e.g., Selwyn, Henderson, Edwards) and countries/regions (e.g., England, Australia, the USA) have emerged as major contributors. International exchange has become frequent, and academic research in the field is highly internationalized. At the institutional level, higher education institutions in the UK and Australia are the core productive forces.

(2) Education and Information Technologies, Computers & Education, and the British Journal of Educational Technology are notable journals that publish research related to digital technology education applications. These journals belong to the research field of educational technology and provide effective platforms for communicating work on digital technology education applications.

(3) Over the past two decades, research on digital technology education applications has progressed from budding beginnings through initial development and critical exploration to accelerated transformation, and it is currently approaching maturity. Technological progress and changing times have been the key driving forces of educational transformation and innovation, and both have played important roles in promoting the continuous development of education.

(4) Influenced by the pandemic, three emerging frontiers have appeared in current research on digital technology education applications: physical education, digital transformation, and professional development driven by digital technology. These frontier hotspots reflect the core issues the education system faces when encountering new technologies. The evolution of research hotspots shows that technological breakthroughs that extend education beyond its original boundaries of time and space create new challenges, and that education continuously renews itself by solving one hotspot problem after another.

The present study offers significant practical implications for scholars and practitioners in the field of digital technology education applications. First, it presents a well-defined framework of existing research in this area, serving as a comprehensive guide for new entrants and shedding light on the field’s developmental trajectory. Second, it identifies several contemporary research hotspots, offering a valuable decision-making resource for scholars exploring potential research directions. Third, it undertakes an exhaustive analysis of the published literature to identify core journals in the field, with Sustainability identified as a promising open-access journal that publishes extensively on this topic; this finding can help scholars select appropriate journals for their research outputs.

Limitations and future research

Owing to several objective constraints, this study has some limitations. First, bibliometric analysis software imposes strict requirements on data quality. To ensure the quality and integrity of the collected data, the study selected only journal articles indexed in SCIE and SSCI, the core collection of the Web of Science database, and excluded other databases, conference papers, editorials, and other publication types; some scientific research and original viewpoints in the field of digital technology education applications may therefore have been overlooked. In addition, although professional software was used to obtain relatively objective quantitative data, the analysis and interpretation of those data inevitably carry a degree of subjectivity, which cannot be completely avoided. Future research will therefore broaden the scope of literature screening and proactively engage scholars in the field to obtain objective, state-of-the-art insights while minimizing the impact of personal subjectivity on the analysis.
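The inclusion rule described above can be sketched as a simple filter. The field names ("doc_type", "indexes") are hypothetical stand-ins for Web of Science metadata; the actual screening was performed with the database's own search filters rather than code:

```python
# Illustrative screening of bibliographic records, mirroring the
# inclusion rule described above: keep only journal articles indexed
# in SCIE or SSCI; drop conference papers, editorials, and the rest.
# Field names ("doc_type", "indexes") are assumptions for this sketch.
def include(record):
    return (
        record["doc_type"] == "Article"
        and bool({"SCIE", "SSCI"} & set(record["indexes"]))
    )

records = [
    {"doc_type": "Article", "indexes": ["SSCI"]},
    {"doc_type": "Proceedings Paper", "indexes": ["CPCI-S"]},
    {"doc_type": "Editorial", "indexes": ["SCIE"]},
]
kept = [r for r in records if include(r)]
print(len(kept))  # 1
```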

Data availability

The datasets analyzed during the current study are available in the Dataverse repository: https://doi.org/10.7910/DVN/F9QMHY

Alabdulaziz MS (2021) COVID-19 and the use of digital technology in mathematics education. Educ Inf Technol 26(6):7609–7633. https://doi.org/10.1007/s10639-021-10602-3

Arif TB, Munaf U, Ul-Haque I (2023) The future of medical education and research: is ChatGPT a blessing or blight in disguise? Med Educ Online 28. https://doi.org/10.1080/10872981.2023.2181052

Banerjee M, Chiew D, Patel KT, Johns I, Chappell D, Linton N, Cole GD, Francis DP, Szram J, Ross J, Zaman S (2021) The impact of artificial intelligence on clinical education: perceptions of postgraduate trainee doctors in London (UK) and recommendations for trainers. BMC Med Educ 21. https://doi.org/10.1186/s12909-021-02870-x

Barlovits S, Caldeira A, Fesakis G, Jablonski S, Koutsomanoli Filippaki D, Lázaro C, Ludwig M, Mammana MF, Moura A, Oehler DXK, Recio T, Taranto E, Volika S (2022) Adaptive, synchronous, and mobile online education: developing the ASYMPTOTE learning environment. Mathematics 10:1628. https://doi.org/10.3390/math10101628


Baron NS (2021) Know what? How digital technologies undermine learning and remembering. J Pragmat 175:27–37. https://doi.org/10.1016/j.pragma.2021.01.011

Batista J, Morais NS, Ramos F (2016) Researching the use of communication technologies in higher education institutions in Portugal. https://doi.org/10.4018/978-1-5225-0571-6.ch057

Beardsley M, Albó L, Aragón P, Hernández-Leo D (2021) Emergency education effects on teacher abilities and motivation to use digital technologies. Br J Educ Technol 52. https://doi.org/10.1111/bjet.13101

Bennett S, Maton K (2010) Beyond the “digital natives” debate: towards a more nuanced understanding of students’ technology experiences. J Comput Assist Learn 26:321–331. https://doi.org/10.1111/j.1365-2729.2010.00360.x

Buckingham D, Burn A (2007) Game literacy in theory and practice 16:323–349


Bulfin S, Pangrazio L, Selwyn N (2014) Making “MOOCs”: the construction of a new digital higher education within news media discourse. Int Rev Res Open Distrib Learn 15. https://doi.org/10.19173/irrodl.v15i5.1856

Camilleri MA, Camilleri AC (2016) Digital learning resources and ubiquitous technologies in education. Technol Knowl Learn 22:65–82. https://doi.org/10.1007/s10758-016-9287-7

Chen C (2006) CiteSpace II: detecting and visualizing emerging trends and transient patterns in scientific literature. J Am Soc Inf Sci Technol 57:359–377. https://doi.org/10.1002/asi.20317

Chen J, Dai J, Zhu K, Xu L (2022a) Effects of extended reality on language learning: a meta-analysis. Front Psychol 13:1016519. https://doi.org/10.3389/fpsyg.2022.1016519


Chen J, Wang CL, Tang Y (2022b) Knowledge mapping of volunteer motivation: a bibliometric analysis and cross-cultural comparative study. Front Psychol 13. https://doi.org/10.3389/fpsyg.2022.883150

Cohen A, Soffer T, Henderson M (2022) Students’ use of technology and their perceptions of its usefulness in higher education: international comparison. J Comput Assist Learn 38(5):1321–1331. https://doi.org/10.1111/jcal.12678

Collins A, Halverson R (2010) The second educational revolution: rethinking education in the age of technology. J Comput Assist Learn 26:18–27. https://doi.org/10.1111/j.1365-2729.2009.00339.x

Conole G, Alevizou P (2010) A literature review of the use of Web 2.0 tools in higher education. Walton Hall, Milton Keynes, UK: the Open University, retrieved 17 February

Creely E, Henriksen D, Crawford R, Henderson M (2021) Exploring creative risk-taking and productive failure in classroom practice: a case study of the perceived self-efficacy and agency of teachers at one school. Think Skills Creat 42:100951. https://doi.org/10.1016/j.tsc.2021.100951

Davis N, Eickelmann B, Zaka P (2013) Restructuring of educational systems in the digital age from a co-evolutionary perspective. J Comput Assist Learn 29:438–450. https://doi.org/10.1111/jcal.12032

De Bellis N (2009) Bibliometrics and citation analysis: from the Science Citation Index to cybermetrics. Scarecrow Press

Domínguez A, Saenz-de-Navarrete J, de-Marcos L, Fernández-Sanz L, Pagés C, Martínez-Herráiz JJ (2013) Gamifying learning experiences: practical implications and outcomes. Comput Educ 63:380–392. https://doi.org/10.1016/j.compedu.2012.12.020

Donnison S (2009) Discourses in conflict: the relationship between Gen Y pre-service teachers, digital technologies and lifelong learning. Australasian J Educ Technol 25. https://doi.org/10.14742/ajet.1138

Durfee SM, Jain S, Shaffer K (2003) Incorporating electronic media into medical student education. Acad Radiol 10:205–210. https://doi.org/10.1016/s1076-6332(03)80046-6

Dzikowski P (2018) A bibliometric analysis of born global firms. J Bus Res 85:281–294. https://doi.org/10.1016/j.jbusres.2017.12.054

van Eck NJ, Waltman L (2009) Software survey: VOSviewer, a computer program for bibliometric mapping. Scientometrics 84:523–538. https://doi.org/10.1007/s11192-009-0146-3

Edwards S (2013) Digital play in the early years: a contextual response to the problem of integrating technologies and play-based pedagogies in the early childhood curriculum. Eur Early Child Educ Res J 21:199–212. https://doi.org/10.1080/1350293x.2013.789190

Edwards S (2015) New concepts of play and the problem of technology, digital media and popular-culture integration with play-based learning in early childhood education. Technol Pedagog Educ 25:513–532. https://doi.org/10.1080/1475939x.2015.1108929


Eisenberg MB (2008) Information literacy: essential skills for the information age. DESIDOC J Libr Inf Technol 28:39–47. https://doi.org/10.14429/djlit.28.2.166

Forde C, O’Brien A (2022) A literature review of barriers and opportunities presented by digitally enhanced practical skill teaching and learning in health science education. Med Educ Online 27. https://doi.org/10.1080/10872981.2022.2068210

García-Morales VJ, Garrido-Moreno A, Martín-Rojas R (2021) The transformation of higher education after the COVID disruption: emerging challenges in an online learning scenario. Front Psychol 12. https://doi.org/10.3389/fpsyg.2021.616059

Garfield E (2006) The history and meaning of the journal impact factor. JAMA 295:90. https://doi.org/10.1001/jama.295.1.90


Garzón-Artacho E, Sola-Martínez T, Romero-Rodríguez JM, Gómez-García G (2021) Teachers’ perceptions of digital competence at the lifelong learning stage. Heliyon 7:e07513. https://doi.org/10.1016/j.heliyon.2021.e07513

Gaviria-Marin M, Merigó JM, Baier-Fuentes H (2019) Knowledge management: a global examination based on bibliometric analysis. Technol Forecast Soc Change 140:194–220. https://doi.org/10.1016/j.techfore.2018.07.006

Gilster P (1997) Digital literacy. Wiley Computer Pub, New York

Greenhow C, Lewin C (2015) Social media and education: reconceptualizing the boundaries of formal and informal learning. Learn Media Technol 41:6–30. https://doi.org/10.1080/17439884.2015.1064954

Hawkins DT (2001) Bibliometrics of electronic journals in information science. Inf Res 7(1). http://informationr.net/ir/7-1/paper120.html

Henderson M, Selwyn N, Finger G, Aston R (2015) Students’ everyday engagement with digital technology in university: exploring patterns of use and “usefulness”. J High Educ Policy Manag 37:308–319. https://doi.org/10.1080/1360080x.2015.1034424

Huang CK, Neylon C, Hosking R, Montgomery L, Wilson KS, Ozaygen A, Brookes-Kenworthy C (2020) Evaluating the impact of open access policies on research institutions. eLife 9. https://doi.org/10.7554/elife.57067

Hwang GJ, Tsai CC (2011) Research trends in mobile and ubiquitous learning: a review of publications in selected journals from 2001 to 2010. Br J Educ Technol 42:E65–E70. https://doi.org/10.1111/j.1467-8535.2011.01183.x

Hwang GJ, Wu PH, Zhuang YY, Huang YM (2013) Effects of the inquiry-based mobile learning model on the cognitive load and learning achievement of students. Interact Learn Environ 21:338–354. https://doi.org/10.1080/10494820.2011.575789

Jiang S, Ning CF (2022) Interactive communication in the process of physical education: are social media contributing to the improvement of physical training performance. Universal Access Inf Soc, 1–10. https://doi.org/10.1007/s10209-022-00911-w

Jing Y, Zhao L, Zhu KK, Wang H, Wang CL, Xia Q (2023a) Research landscape of adaptive learning in education: a bibliometric study on research publications from 2000 to 2022. Sustainability 15:3115. https://doi.org/10.3390/su15043115

Jing Y, Wang CL, Chen Y, Wang H, Yu T, Shadiev R (2023b) Bibliometric mapping techniques in educational technology research: a systematic literature review. Educ Inf Technol 1–29. https://doi.org/10.1007/s10639-023-12178-6

Krishnamurthy S (2020) The future of business education: a commentary in the shadow of the Covid-19 pandemic. J Bus Res. https://doi.org/10.1016/j.jbusres.2020.05.034

Kumar S, Lim WM, Pandey N, Christopher Westland J (2021) 20 years of electronic commerce research. Electron Commer Res 21:1–40

Kyza EA, Georgiou Y (2018) Scaffolding augmented reality inquiry learning: the design and investigation of the TraceReaders location-based, augmented reality platform. Interact Learn Environ 27:211–225. https://doi.org/10.1080/10494820.2018.1458039

Laurillard D (2008) Technology enhanced learning as a tool for pedagogical innovation. J Philos Educ 42:521–533. https://doi.org/10.1111/j.1467-9752.2008.00658.x

Li M, Yu Z (2023) A systematic review on the metaverse-based blended English learning. Front Psychol 13. https://doi.org/10.3389/fpsyg.2022.1087508

Luo H, Li G, Feng Q, Yang Y, Zuo M (2021) Virtual reality in K-12 and higher education: a systematic review of the literature from 2000 to 2019. J Comput Assist Learn. https://doi.org/10.1111/jcal.12538

Margaryan A, Littlejohn A, Vojt G (2011) Are digital natives a myth or reality? University students’ use of digital technologies. Comput Educ 56:429–440. https://doi.org/10.1016/j.compedu.2010.09.004

McMillan S (1996) Literacy and computer literacy: definitions and comparisons. Comput Educ 27:161–170. https://doi.org/10.1016/s0360-1315(96)00026-7

Mo CY, Wang CL, Dai J, Jin P (2022) Video playback speed influence on learning effect from the perspective of personalized adaptive learning: a study based on cognitive load theory. Front Psychol 13. https://doi.org/10.3389/fpsyg.2022.839982

Moorhouse BL (2021) Beginning teaching during COVID-19: newly qualified Hong Kong teachers’ preparedness for online teaching. Educ Stud 1–17. https://doi.org/10.1080/03055698.2021.1964939

Moorhouse BL, Wong KM (2021) The COVID-19 Pandemic as a catalyst for teacher pedagogical and technological innovation and development: teachers’ perspectives. Asia Pac J Educ 1–16. https://doi.org/10.1080/02188791.2021.1988511

Moskal P, Dziuban C, Hartman J (2013) Blended learning: a dangerous idea? Internet High Educ 18:15–23

Mughal MY, Andleeb N, Khurram AFA, Ali MY, Aslam MS, Saleem MN (2022) Perceptions of teaching-learning force about Metaverse for education: a qualitative study. J. Positive School Psychol 6:1738–1745

Mustapha I, Thuy Van N, Shahverdi M, Qureshi MI, Khan N (2021) Effectiveness of digital technology in education during COVID-19 pandemic: a bibliometric analysis. Int J Interact Mob Technol 15:136

Nagle J (2018) Twitter, cyber-violence, and the need for a critical social media literacy in teacher education: a review of the literature. Teach Teach Educ 76:86–94

Nazare J, Woolf A, Sysoev I, Ballinger S, Saveski M, Walker M, Roy D (2022) Technology-assisted coaching can increase engagement with learning technology at home and caregivers’ awareness of it. Comput Educ 188:104565

Nguyen UP, Hallinger P (2020) Assessing the distinctive contributions of simulation & gaming to the literature, 1970-2019: a bibliometric review. Simul Gaming 104687812094156. https://doi.org/10.1177/1046878120941569

Nygren H, Nissinen K, Hämäläinen R, Wever B (2019) Lifelong learning: formal, non-formal and informal learning in the context of the use of problem-solving skills in technology-rich environments. Br J Educ Technol 50:1759–1770. https://doi.org/10.1111/bjet.12807

Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, Moher D (2021) The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Int J Surg 88:105906

Pan SL, Zhang S (2020) From fighting COVID-19 pandemic to tackling sustainable development goals: an opportunity for responsible information systems research. Int J Inf Manage 55:102196. https://doi.org/10.1016/j.ijinfomgt.2020.102196

Pan X, Yan E, Cui M, Hua W (2018) Examining the usage, citation, and diffusion patterns of bibliometric mapping software: a comparative study of three tools. J Informetr 12:481–493. https://doi.org/10.1016/j.joi.2018.03.005

Parris Z, Cale L, Harris J, Casey A (2022) Physical activity for health, covid-19 and social media: what, where and why?. Movimento, 28. https://doi.org/10.22456/1982-8918.122533

Pasquini LA, Evangelopoulos N (2016) Sociotechnical stewardship in higher education: a field study of social media policy documents. J Comput High Educ 29:218–239

Pérez-Sanagustín M, Hernández-Leo D, Santos P, Delgado Kloos C, Blat J (2014) Augmenting reality and formality of informal and non-formal settings to enhance blended learning. IEEE Trans Learn Technol 7:118–131. https://doi.org/10.1109/TLT.2014.2312719

Pinto M, Leite C (2020) Digital technologies in support of students learning in Higher Education: literature review. Digital Education Review 343–360. https://doi.org/10.1344/der.2020.37.343-360

Pires F, Masanet MJ, Tomasena JM, Scolari CA (2022) Learning with YouTube: beyond formal and informal through new actors, strategies and affordances. Convergence 28(3):838–853. https://doi.org/10.1177/1354856521102054

Pritchard A (1969) Statistical bibliography or bibliometrics 25:348

Romero M, Romeu T, Guitert M, Baztán P (2021) Digital transformation in higher education: the UOC case. In ICERI2021 Proceedings (pp. 6695–6703). IATED https://doi.org/10.21125/iceri.2021.1512

Romero-Hall E, Jaramillo Cherrez N (2022) Teaching in times of disruption: faculty digital literacy in higher education during the COVID-19 pandemic. Innovations in Education and Teaching International 1–11. https://doi.org/10.1080/14703297.2022.2030782

Rospigliosi PA (2023) Artificial intelligence in teaching and learning: what questions should we ask of ChatGPT? Interact Learn Environ 31:1–3. https://doi.org/10.1080/10494820.2023.2180191

Salas-Pilco SZ, Yang Y, Zhang Z (2022) Student engagement in online learning in Latin American higher education during the COVID-19 pandemic: a systematic review. Br J Educ Technol 53(3):593–619. https://doi.org/10.1111/bjet.13190

Selwyn N (2009) The digital native-myth and reality. Aslib Proc 61(4):364–379. https://doi.org/10.1108/00012530910973776

Selwyn N (2012) Making sense of young people, education and digital technology: the role of sociological theory. Oxf Rev Educ 38:81–96. https://doi.org/10.1080/03054985.2011.577949

Selwyn N, Facer K (2014) The sociology of education and digital technology: past, present and future. Oxf Rev Educ 40:482–496. https://doi.org/10.1080/03054985.2014.933005

Selwyn N, Banaji S, Hadjithoma-Garstka C, Clark W (2011) Providing a platform for parents? Exploring the nature of parental engagement with school learning platforms. J Comput Assist Learn 27:314–323. https://doi.org/10.1111/j.1365-2729.2011.00428.x

Selwyn N, Aagaard J (2020) Banning mobile phones from classrooms-an opportunity to advance understandings of technology addiction, distraction and cyberbullying. Br J Educ Technol 52. https://doi.org/10.1111/bjet.12943

Selwyn N, O’Neill C, Smith G, Andrejevic M, Gu X (2021) A necessary evil? The rise of online exam proctoring in Australian universities. Media Int Aust. https://doi.org/10.1177/1329878x211005862

Selwyn N, Pangrazio L, Nemorin S, Perrotta C (2019) What might the school of 2030 be like? An exercise in social science fiction. Learn Media Technol 1–17. https://doi.org/10.1080/17439884.2020.1694944

Selwyn N (2016) What works and why? Understanding successful technology enabled learning within institutional contexts: final report appendices (Part B). Monash University & Griffith University

Sjöberg D, Holmgren R (2021) Informal workplace learning in Swedish police education: a teacher perspective. Vocations and Learning. https://doi.org/10.1007/s12186-021-09267-3

Strotmann A, Zhao D (2012) Author name disambiguation: what difference does it make in author-based citation analysis? J Am Soc Inf Sci Technol 63:1820–1833


Sutherland R, Facer K, Furlong R, Furlong J (2000) A new environment for education? The computer in the home. Comput Educ 34:195–212. https://doi.org/10.1016/s0360-1315(99)00045-7

Szeto E, Cheng AY-N, Hong J-C (2015) Learning with social media: how do preservice teachers integrate YouTube and social media in teaching? Asia-Pac Educ Res 25:35–44. https://doi.org/10.1007/s40299-015-0230-9

Tang E, Lam C (2014) Building an effective online learning community (OLC) in blog-based teaching portfolios. Internet High Educ 20:79–85. https://doi.org/10.1016/j.iheduc.2012.12.002

Taskin Z, Al U (2019) Natural language processing applications in library and information science. Online Inf Rev 43:676–690. https://doi.org/10.1108/oir-07-2018-0217

Tegtmeyer K, Ibsen L, Goldstein B (2001) Computer-assisted learning in critical care: from ENIAC to HAL. Crit Care Med 29:N177–N182. https://doi.org/10.1097/00003246-200108001-00006


Timotheou S, Miliou O, Dimitriadis Y, Sobrino SV, Giannoutsou N, Cachia R, Moné AM, Ioannou A (2023) Impacts of digital technologies on education and factors influencing schools’ digital capacity and transformation: a literature review. Educ Inf Technol 28(6):6695–6726. https://doi.org/10.1007/s10639-022-11431-8

Trujillo Maza EM, Gómez Lozano MT, Cardozo Alarcón AC, Moreno Zuluaga L, Gamba Fadul M (2016) Blended learning supported by digital technology and competency-based medical education: a case study of the social medicine course at the Universidad de los Andes, Colombia. Int J Educ Technol High Educ 13. https://doi.org/10.1186/s41239-016-0027-9

Turin O, Friesem Y (2020) Is that media literacy? Israeli and US media scholars’ perceptions of the field. J Media Lit Educ 12:132–144

Van Eck NJ, Waltman L (2019) VOSviewer manual. Universiteit Leiden

Vratulis V, Clarke T, Hoban G, Erickson G (2011) Additive and disruptive pedagogies: the use of slowmation as an example of digital technology implementation. Teach Teach Educ 27:1179–1188. https://doi.org/10.1016/j.tate.2011.06.004

Wang CL, Dai J, Xu LJ (2022) Big data and data mining in education: a bibliometrics study from 2010 to 2022. In: 2022 7th International Conference on Cloud Computing and Big Data Analytics (ICCCBDA), pp 507–512. IEEE. https://doi.org/10.1109/icccbda55098.2022.9778874

Wang CL, Dai J, Zhu KK, Yu T, Gu XQ (2023) Understanding the continuance intention of college students toward new E-learning spaces based on an integrated model of the TAM and TTF. Int J Hum-Comput Int 1–14. https://doi.org/10.1080/10447318.2023.2291609

Wong L-H, Boticki I, Sun J, Looi C-K (2011) Improving the scaffolds of a mobile-assisted Chinese character forming game via a design-based research cycle. Comput Hum Behav 27:1783–1793. https://doi.org/10.1016/j.chb.2011.03.005

Wu R, Yu Z (2023) Do AI chatbots improve students’ learning outcomes? Evidence from a meta-analysis. Br J Educ Technol. https://doi.org/10.1111/bjet.13334

Yang D, Zhou J, Shi D, Pan Q, Wang D, Chen X, Liu J (2022) Research status, hotspots, and evolutionary trends of global digital education via knowledge graph analysis. Sustainability 14:15157. https://doi.org/10.3390/su142215157

Yu T, Dai J, Wang CL (2023) Adoption of blended learning: Chinese university students’ perspectives. Humanit Soc Sci Commun 10:390

Yu Z (2022) Sustaining student roles, digital literacy, learning achievements, and motivation in online learning environments during the COVID-19 pandemic. Sustainability 14:4388. https://doi.org/10.3390/su14084388

Za S, Spagnoletti P, North-Samardzic A (2014) Organisational learning as an emerging process: the generative role of digital tools in informal learning practices. Br J Educ Technol 45:1023–1035. https://doi.org/10.1111/bjet.12211

Zhang X, Chen Y, Hu L, Wang Y (2022) The metaverse in education: definition, framework, features, potential applications, challenges, and future research topics. Front Psychol 13:1016300. https://doi.org/10.3389/fpsyg.2022.1016300

Zhou M, Dzingirai C, Hove K, Chitata T, Mugandani R (2022) Adoption, use and enhancement of virtual learning during COVID-19. Education and Information Technologies. https://doi.org/10.1007/s10639-022-10985-x


Acknowledgements

This research was supported by the Zhejiang Provincial Social Science Planning Project, “Mechanisms and Pathways for Empowering Classroom Teaching through Learning Spaces under the Strategy of High-Quality Education Development”, the 2022 National Social Science Foundation Education Youth Project “Research on the Strategy of Creating Learning Space Value and Empowering Classroom Teaching under the background of ‘Double Reduction’” (Grant No. CCA220319) and the National College Student Innovation and Entrepreneurship Training Program of China (Grant No. 202310337023).

Author information

Authors and affiliations

College of Educational Science and Technology, Zhejiang University of Technology, Zhejiang, China

Chengliang Wang, Xiaojiao Chen, Yidan Liu & Yuhui Jing

Graduate School of Business, Universiti Sains Malaysia, Minden, Malaysia

Department of Management, The Chinese University of Hong Kong, Hong Kong, China

College of Humanities and Social Sciences, Beihang University, Beijing, China


Contributions

Conceptualization: Y.J., C.W.; methodology, C.W.; software, C.W., Y.L.; writing-original draft preparation, C.W., Y.L.; writing-review and editing, T.Y., Y.L., C.W.; supervision, X.C., T.Y.; project administration, Y.J.; funding acquisition, X.C., Y.L. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Yuhui Jing.

Ethics declarations

Ethical approval

Ethical approval was not required as the study did not involve human participants.

Informed consent

Informed consent was not required as the study did not involve human participants.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Wang, C., Chen, X., Yu, T. et al. Education reform and change driven by digital technology: a bibliometric study from a global perspective. Humanit Soc Sci Commun 11, 256 (2024). https://doi.org/10.1057/s41599-024-02717-y


Received : 11 July 2023

Accepted : 17 January 2024

Published : 12 February 2024

DOI : https://doi.org/10.1057/s41599-024-02717-y


This article is cited by

A meta-analysis of learners’ continuance intention toward online education platforms

  • Chengliang Wang

Education and Information Technologies (2024)


A Comprehensive Review of Educational Technology on Objective Learning Outcomes in Academic Contexts

  • Review Article
  • Published: 05 April 2021
  • Volume 33, pages 1583–1630 (2021)


  • Kam Leung Yeung 1 ,
  • Shana K. Carpenter 1 &
  • Daniel Corral 2  


Rapid advances in technology during the last few decades have provided a multitude of new options for teaching and learning. Although technology is being widely adopted in education, there is a shortage of research on the effects that this technology might have on student learning, and why those effects occur. We conducted a comprehensive review of the literature on various uses of digital technology in educational settings, and the effects of that technology on students’ objective learning outcomes. We interpret these effects within the context of empirical research on effective principles of learning, and the extent to which the affordances of technology permit opportunities for increased engagement with the material, retrieval practice, and spacing. Results revealed that technology is neither beneficial nor harmful for learning when used primarily as a means of presenting information (e.g., information viewed on a computer screen vs. on paper), but can be beneficial when it involves unique affordances that leverage effective learning principles. We discuss these findings in light of the ever-increasing availability of technology in education, and the importance of evidence-guided criteria in decisions about adoption and implementation.


Even with the same instructor across all conditions, there is a possibility that some instructor-related factors could change across conditions or across time (e.g., instructors could improve their teaching effectiveness from one term to the next, or have difficulty implementing a new technology). Notwithstanding these possibilities, instructor-related factors that could influence student learning are likely to be greater when there are different instructors across the conditions (e.g., bringing differences in teaching style, personality, grading practices, or experience), such that the potential influence of these factors was minimized by ensuring that the same instructor taught all students.

In these studies it cannot be determined whether the immediacy of the feedback per se was responsible for the learning gains. Some studies have directly explored the timing of feedback and have found that feedback can be more beneficial for learning some types of materials—particularly those involving non-overlapping materials—when it is delayed rather than provided immediately (Carpenter and Vul 2011; Corral et al., in press). In the studies reviewed here, however, the answer to any one item (such as a math problem or grammatical rule) could have informed students’ answers to subsequent problems of the same type. Beyond the timing of feedback per se, therefore, the immediacy of the correct answers could have changed the way that students approached subsequent questions of the same type, increasing the likelihood that they would apply the correct answer.

A third group was included that used 3-D printers but did not receive the same type of lecture-based guidance from the instructor. Due to the difference in instructional procedures, this “experiential learning” group is not included in the comparisons.

References

* indicates articles included in the review.

* Anderson, G. R., & Vander Meer, A. W. (1954). A comparative study on the effectiveness of lessons on the slide rule presented via television and in person. The Mathematics Teacher, 47, 323–327.

* Anderson, H. G., Frazier, L., Anderson, S. L., Stanton, R., Gillette, C., Broedel-Zaugg, K., & Yingling, K. (2017). Comparison of pharmaceutical calculations learning outcomes achieved within a traditional lecture or flipped classroom andragogy. American Journal of Pharmaceutical Education, 81, 1-9.

* Arias, J. J., Swinton, J., & Anderson, K. (2018). Online vs. face-to-face: A comparison of student outcomes with random assignment. e-Journal of Business Education & Scholarship of Teaching, 12, 1-23.

* Arús, N. A., da Silva, A. M., Duarte, R., da Silveira, P. F., Vizzotto, M. B., da Silveira, H. L. D., & da Silveira, H. E. D. (2017). Teaching dental students to understand the temporomandibular joint using MRI: Comparison of conventional and digital learning methods. Journal of Dental Education, 81, 752-758.

* Baumann-Birkbeck, L., Karaksha, A., Anoopkumar-Dukie, S., Grant, G., Davey, A., Nirthanan, S., & Owen, S. (2015). Benefits of e-learning in chemotherapy pharmacology education. Currents in Pharmacy Teaching & Learning, 7, 106-111.

Benjamin, L. T. (1988). A history of teaching machines. American Psychologist, 43 , 703–712.

* Blázquez, B. O., Masluk, B., Gascon, S., Díaz, R. F., Aguilar-Latorre, A., Magallón, I. A., & Botaya, R. M. (2019). The use of flipped classroom as an active learning approach improves academic performance in social work: A randomized trial in a university. PLOS ONE, 14, e0214623.

* Boblick, J. M. (1972). Writing chemical formulas: A comparison of computer assisted instruction with traditional teaching techniques. Science Education, 56, 221-225.

* Bortnik, B., Stozhko, N., Pervukhina, I., Tchernysheva, A., & Belysheva, G. (2017). Effect of virtual analytical chemistry laboratory on enhancing student research skills and practices. Research in Learning Technology, 25, 1-20.

* Botezatu, M., Hult, H., Tessma, M. K., & Fors, U. G. H. (2010). Virtual patient simulation for learning and assessment: Superior results in comparison with regular course exams. Medical Teacher, 32, 845-850.

* Bryner, B. S., Saddawi-Konefka, D., Gest, T. R. (2008). The impact of interactive, computerized educational modules on preclinical medical education. Anatomical Sciences Education, 1, 247-251.

* Cakir, O., & Simsek, N. (2010). A comparative analysis of the effects of computer and paper-based personalization on student achievement. Computers & Education, 55, 1524-1531.

* Campbell, D. L., Peck, D. L., Horn, C. J., & Leigh, R. K. (1987). Comparison of computer-assisted instruction and print drill performance: A research note. Educational Communication & Technology, 35, 95-103.

Carpenter, S. K. (2009). Cue strength as a moderator of the testing effect: The benefits of elaborative retrieval. Journal of Experimental Psychology: Learning, Memory, & Cognition, 35 , 1563–1569.

Carpenter, S. K. (2011). Semantic information activated during retrieval contributes to later retention: Support for the mediator effectiveness hypothesis of the testing effect. Journal of Experimental Psychology: Learning, Memory, & Cognition, 37 , 1547–1552.

Carpenter, S. K. (2014). Spacing and interleaving of study and practice. In V. A. Benassi, C. E. Overson, & C. M. Hakala (Eds.), Applying the science of learning in education: Infusing psychological science into the curriculum (pp. 131-141) . American Psychological Association.

Carpenter, S. K. (2017). Spacing effects in learning and memory. In J. T. Wixted (Ed.), Cognitive psychology of memory, Vol. 2, Learning & memory: A comprehensive reference, 2nd edition, J. H. Byrne (Ed.), pp. 465-485. Oxford: Academic Press.

Carpenter, S. K. (2020). Distributed practice/spacing effect. In L.-f. Zhang (Ed.), Oxford Research Encyclopedia of Education . Oxford University Press.

Carpenter, S. K., & Vul, E. (2011). Delaying feedback by three seconds benefits retention of face-name pairs: The role of active anticipatory processing. Memory & Cognition, 39 , 1211–1221.

Carpenter, S. K., Cepeda, N. J., Rohrer, D., Kang, S. H. K., & Pashler, H. (2012). Using spacing to enhance diverse forms of learning: Review of recent research and implications for instruction. Educational Psychology Review, 24 , 369–378.

Carpenter, S. K., Rahman, S., & Perkins, K. (2018). The effects of prequestions on classroom learning. Journal of Experimental Psychology: Applied, 24 , 34–42.

Cepeda, N. J., Pashler, H., Vul, E., Wixted, J. T., & Rohrer, D. (2006). Distributed practice in verbal recall tasks: A review and quantitative synthesis. Psychological Bulletin, 132, 354–380.

* Cerra, P. P., González, J. M. S., Parra, B. B., Ortiz, D. R., & Peñín, P. I. A. (2014). Can interactive web-based CAD tools improve the learning of engineering drawing? A case study. Journal of Science Education Technology, 23, 398-411.

* Chang, C.-Y. (2000). Enhancing tenth graders’ earth-science learning through computer-assisted instruction. Journal of Geoscience Education, 48, 636-640.

* Chang, R-C., & Yu, Z-S. (2018). Using augmented reality technologies to enhance students’ engagement and achievement in science laboratories. International Journal of Distance Education Technologies, 16, 54-72.

* Chang, K.-E., Wu, L.-J., Lai, S.-C., & Sung, Y.-T. (2016). Using mobile devices to enhance the interactive learning for spatial geometry. Interactive Learning Environments, 24, 916-934.

* Chen, J. C., Kadlowec, J. A., & Whittinghill, D. C. (2008). Using handheld computers for instantaneous feedback to enhance student learning and promote interaction. International Journal of Engineering Education, 24, 616-624.

Clunie, L., Morris, N. P., Joynes, V. C. T., & Pickering, J. D. (2018). How comprehensive are research studies investigating the efficacy of technology-enhanced learning resources in anatomy education? A systematic review. Anatomical Sciences Education, 11 , 303–319.

Corral, D., Carpenter, S. K., Perkins, K., & Gentile, D. A. (2020). Assessing students’ use of optional online lecture reviews. Applied Cognitive Psychology, 34 , 318–329.

Corral, D., Carpenter, S. K., & Clingan-Siverly, S. (in press). The effects of immediate versus delayed explanatory feedback on complex concept learning. Quarterly Journal of Experimental Psychology.

Cuban, L. (1986). Teachers and machines: The classroom use of technology since 1920 . New York: Teachers College Press.

* Daly, C. J., Bulloch, J. M., Ma, M., & Aidulis, D. (2016). A comparison of animated versus static images in an instructional multimedia presentation. Advances in Physiology Education, 40, 201-205.

* Debevc, M., Weiss, J., Šorgo, A., & Kožuh, I. (2020). Solfeggio learning and the influence of a mobile application based on visual, auditory and tactile modalities. British Journal of Educational Technology, 51, 177-193.

* Delafuente, J. C., Araujo, O. E., & Legg, S. M. (1998). Traditional lecture format compared to computer-assisted instruction in pharmacy calculations. American Journal of Pharmaceutical Education, 62, 62-66.

Delaney, P. F., Verkoeijen, P. P. J. L., & Spirgel, A. (2010). Spacing and testing effects: A deeply critical, lengthy, and at times discursive review of the literature. In B. H. Ross (Ed.), The psychology of learning & motivation: Advances in research & theory (Vol. 53 , pp. 63–147). New York: Academic Press.

Deslauriers, L., McCarty, L. S., Miller, K., Callaghan, K., & Kestin, G. (2019). Measuring actual learning versus feeling of learning in response to being actively engaged in the classroom. Proceedings of the National Academy of Sciences, 116 , 19251–19257.

* Dewhurst, D. G., Hardcastle, J., Hardcastle, P. T., & Stuart, E. (1994). Comparison of a computer simulation program and a traditional laboratory practical class for teaching the principles of intestinal absorption. Educational Experiments, 12, 95-104.

* Diliberto-Macaluso, K., & Hughes, A. (2016). The use of mobile apps to enhance student learning in introduction to psychology. Teaching of Psychology, 43, 48-52.

* Dorji, U., Panjaburee, P., & Srisawasdi, N. (2015). A learning cycle approach to developing educational computer game for improving students’ learning and awareness in electric energy consumption and conservation. Educational Technology & Society, 18, 91-105.

* Du, C. (2011). A comparison of traditional and blended learning in introductory principles of accounting course. American Journal of Business Education, 4, 1-10.

* Ebadi, S., & Ghuchi, K. D. (2018). Investigating the effects of blended learning approach on vocabulary enhancement from EFL learners’ perspective. i-Manager’s Journal on English Language Teaching, 8, 57-68.

* Ebadi, S., & Rahimi, M. (2017). Exploring the impact of online peer-editing using google docs on EFL learners’ academic writing skills: A mixed methods study. Computer Assisted Language Learning, 30, 787-815.

* Ebadi, S., & Rahimi, M. (2018). An exploration into the impact of WebQuest-based classroom on EFL learners’ critical thinking and academic writing skills: A mixed methods study. Computer Assisted Language Learning, 31, 617-651.

Ebbinghaus, H. (1885/1913). Memory (H. A. Ruger, C. E. Bussenius, Transl.). Teachers College, Columbia University, New York.

* Edwards, C. M., Rule, A. C., & Boody, R. M. (2013). Comparison of face-to-face and online mathematics learning of sixth graders. Journal of Computers in Mathematics & Science Teaching, 32, 25-47.

* Ellinger, R. S., & Frankland, P. (1976). Computer-assisted and lecture instruction: A comparative experiment. Journal of Geography, 75, 109-120.

* Englert, C. S., Zhao, Y., Collings, N., & Romig, N. (2005). Learning to read words: The effects of internet-based software on the improvement of reading performance. Remedial & Special Education, 26, 357-371.

* Fajardo-Lira, C., & Heiss, C. (2006). Comparing the effectiveness of a supplemental computer-based food safety tutorial to traditional education in an introductory food science course. Journal of Food Science Education, 5, 31-33.

Fernandez, J., & Jamet, E. (2017). Extending the testing effect to self-regulated learning. Metacognition & Learning, 12 , 131–156.

* Francescucci, A., & Foster, M. (2013). The VIRI (virtual, interactive, real-time, instructor-led) classroom: The impact of blended Synchronous online courses on student performance, engagement, and satisfaction. Canadian Journal of Higher Education, 43, 78-91.

* Francescucci, A., & Rohani, L. (2019). Exclusively synchronous online (VIRI) learning: The impact on student performance and engagement outcomes. Journal of Marketing Education, 41, 60-69.

Geller, J., Carpenter, S. K., Lamm, M. H., Rahman, S., Armstrong, P. I., & Coffman, C. R. (2017). Prequestions do not enhance the benefits of retrieval in a STEM classroom. Cognitive Research: Principles & Implications, 2 , 42.

Gerbier, E., & Toppino, T. C. (2015). The effect of distributed practice: Neuroscience, cognition, and education. Trends in Neuroscience & Education, 4 , 49–59.

* Gibbons, N. J., Evans, C., Payne, A., Shah, K., & Griffin, D. K. (2004). Computer simulations improve university instructional laboratories. Cell Biology Education, 3, 263-269.

* Goh, C. F., & Ong, E. T. (2019). Flipped classroom as an effective approach in enhancing student learning of a pharmacy course with a historically low student pass rate. Currents in Pharmacy Teaching & Learning, 11, 621-629.

Golonka, E. M., Bowles, A. R., Frank, V. M., Richardson, D. L., & Freynik, S. (2014). Technologies for foreign language learning: A review of technology types and their effectiveness. Computer Assisted Language Learning, 27 , 70–105.

* González, J. A., Jover, L. Cobo, E., & Muñoz, P. (2010). A web-based learning tool improves student performance in statistics: A randomized masked trial. Computers & Education, 55, 704-713.

Gray, L., Thomas, N., & Lewis, L. (2010). Teachers’ use of educational technology in US public schools: 2009. First look. NCES 2010-040 . Washington, DC: National Center for Education Statistics, Institute of Education Sciences, US Department of Education.

Grgurović, M., Chapelle, C. A., & Shelley, M. C. (2013). A meta-analysis of effectiveness studies on computer technology-supported language learning. ReCALL, 25 , 165–198.

* Hahn, W., Fairchild, C., & Dowis, W. B. (2013). Online homework managers and intelligent tutoring systems: A study of their impact on student learning in the introductory financial accounting classroom. Issues in Accounting Education, 28, 513-535.

* Harrington, D. (1999). Teaching statistics: A comparison of traditional classroom and programmed instruction/distance learning approaches. Journal of Social Work Education, 35, 343-352.

* Hollerbach, K., & Mims, B. (2007). Choosing wisely: A comparison of online, televised, and face-to-face instructional methods on knowledge acquisition of broadcast audience concepts. Journalism & Mass Communication Educator, 62, 176-189.

* Hsiao, H-S., Chen, J-C., Lin, C-Y., Zhuo, P-W., & Lin, K-Y. (2019). Using 3D printing technology with experiential learning strategies to improve preengineering students’ comprehension of abstract scientific concepts and hands-on ability. Journal of Computer Assisted Learning, 35, 178-187.

* Huang, H.-C. (2014). Online versus paper-based instruction: Comparing two strategy training modules for improving reading comprehension. RELC Journal, 45, 165-180.

* Jeffries, P. R. (2001). Computer versus lecture: A comparison of two methods of teaching oral medication administration in a nursing skills laboratory. Journal of Nursing Education, 40, 323-329.

* Johnson, S. D., Aragon, S. R., Shaik, N., & Palma-Rivas, N. (2000). Comparative analysis of learner satisfaction and learning outcomes in online and face-to-face learning environments. Journal of Interactive Learning Research, 11, 29-49.

* Johnson, D., Burnett, M., & Rolling, P. (2002). Comparison of internet and traditional classroom instruction in a consumer economics course. Journal of Family & Consumer Sciences Education, 20, 20-28.

* Karaksha, A., Grant, G., Nirthanan, S. N., Davey, A. K., & Anoopkumar-Dukie, S. (2014). A comparative study to evaluate the educational impact of e-learning tools on Griffith University pharmacy students’ level of understanding using Bloom’s and SOLO taxonomies. Education Research International, 1-11.

Karpicke, J. D. (2017). Retrieval-based learning: A decade of progress. In J. T. Wixted (Ed.), Cognitive psychology of memory, Vol. 2. Learning and memory: A comprehensive reference (J. H. Byrne, Series Ed.), pp. 487-514. Oxford: Academic Press.

* Kiliçkaya, F. (2015). Computer-based grammar instruction in an EFL context: Improving the effectiveness of teaching adverbial clauses. Computer Assisted Language Learning, 28, 325-340.

Kirkwood, A., & Price, L. (2013). Missing: Evidence of a scholarly approach to teaching and learning with technology in higher education. Teaching in Higher Education, 18 , 327–337.

Kirkwood, A., & Price, L. (2014). Technology-enhanced learning and teaching in higher education: What is ‘enhanced’ and how do we know? A critical literature review. Learning, Media, & Technology, 39 , 6–36.

Kornell, N., & Vaughn, K. E. (2016). How retrieval attempts affect learning: A review and synthesis. Psychology of Learning & Motivation, 65 , 183–215.

Kuepper-Tetzel, C. E. (2014). Strong effects on weak theoretical grounds: Understanding the distributed practice effect. Zeitschrift für Psychologie, 222 , 71–81.

* Kühl, T., & Münzer, S. (2019). The moderating role of additional information when learning with animations compared to static pictures. Instructional Science, 47, 659-677.

* Kunnath, B., & Kriek, J. (2018). Exploring effective pedagogies using computer simulations to improve grade 12 learners’ understanding of the photoelectric effect. African Journal of Research in Mathematics, Science & Technology Education, 22, 329-339.

* Lancellotti, M., Thomas, S., & Kohli, C. (2016). Online video modules for improvement in student learning. Journal of Education for Business, 91, 19-22.

Lee, S. W.-Y., & Tsai, C.-C. (2013). Technology-supported learning in secondary and undergraduate biological education: Observations from literature review. Journal of Science Education & Technology, 22 , 226–233.

* Lee, C. S. C., Rutecki, G. W., Whittier, F. C., Clarett, M. R., & Jarjoura, D. (1997). A comparison of interactive computerized medical education software with a more traditional teaching format. Teaching & Learning in Medicine, 9, 111-115.

* Lents, N. H., & Cifuentes, O. E. (2009). Web-based learning enhancements: Video lectures through voice-over powerpoint in a majors-level biology course. Journal of College Science Teaching, 39, 38-46.

* Lewis, J. L. (2015). A comparison between two different activities for teaching learning principles: Virtual animal labs versus human demonstrations. Scholarship of Teaching & Learning in Psychology, 1, 182-188.

Li, Q., & Ma, X. (2010). A meta-analysis of the effects of computer technology on school students’ mathematics learning. Educational Psychology Review, 22 , 215–243.

* Li, J-T., & Tong, F. (2019). Multimedia-assisted self-learning materials: The benefits of E-flashcards for vocabulary learning in Chinese as a foreign language. Reading & Writing, 32, 1175-1195.

* Lin, Y-T. (2019). Impacts of a flipped classroom with a smart learning diagnosis system on students’ learning performance, perception, and problem solving ability in a software engineering course. Computers in Human Behavior, 95, 187-196.

Little, J. L., & McDaniel, M. A. (2015). Metamemory monitoring and control following retrieval practice for text. Memory & Cognition, 43 , 85–98.

* Liu, H.-C., & Su, I.-H. (2011). Learning residential electrical wiring through computer simulation: The impact of computer-based learning environments on student achievement and cognitive load. British Journal of Educational Technology, 42, 598-607.

* Liu, T.-C., Lin, Y.-C., & Kinshuk. (2010). The application of simulation-assisted learning statistics (SALS) for correcting misconceptions and improving understanding of correlation. Journal of Computer Assisted Learning, 26, 143-158.

* Liu, K-P, Tai, S-J. D., & Liu, C-C. (2018). Enhancing language learning through creation: The effect of digital storytelling on student learning motivation and performance in a school English course. Educational Technology Research & Development, 66, 913-935.

* Lucchetti, A. L. G., Ezequiel, O. D. S., de Oliveira, I. N., Moreira-Almeida, A., & Lucchetti, G. (2018). Using traditional or flipped classrooms to teach “Geriatrics and Gerontology?” Investigating the impact of active learning on medical students’ competencies. Medical Teacher, 40, 1248-1256.

Lui, A. K.-F., Poon, M. H. M., & Wong, R. M. H. (2019). Automated generators of examples and problems for studying computer algorithms. Interactive Technology & Smart Education, 16 , 204–218.

* MacLaughlin, E. J., Supernaw, R. B., & Howard, K. A. (2004). Impact of distance learning using videoconferencing technology on student performance. American Journal of Pharmaceutical Education, 68, 58.

* Mathiowetz, V., Yu, C.-H., & Quake-Rapp, C. (2016). Comparison of a gross anatomy laboratory to online anatomy software for teaching anatomy. Anatomical Sciences Education, 9 , 52–59.

Mayer, R. E., & Moreno, R. (2002). Animation as an aid to multimedia learning. Educational Psychology Review, 14 , 87–99.

* McClean, P., Johnson, C., Rogers, R., Daniels, L., Reber, J., Slator, B. M., Terpstra, J., & White, A. (2005). Molecular and cellular biology animations: Development and impact on student learning. Cell Biology Education, 4, 169-175.

McDaniel, M. A., Agarwal, P. K., Huelser, B. J., McDermott, K. B., & Roediger III, H. L. (2011). Test-enhanced learning in a middle school science classroom: The effects of quiz frequency and placement. Journal of Educational Psychology, 103 , 399–414.

* McDonough, M., & Marks, I. M. (2002). Teaching medical students exposure therapy for phobia/panic – randomized, controlled comparison of face-to-face tutorial in small groups vs. solo computer instruction. Medical Education, 36, 412-417.

* McLaughlin, J. E., & Rhoney, D. H. (2015). Comparison of an interactive e-learning preparatory tool and a conventional downloadable handout used within a flipped neurologic pharmacotherapy lecture. Currents in Pharmacy Teaching & Learning, 7, 12-19.

* Mešić, V., Dervić, D., Gazibegović-Busuladžić, A., & Salibašić, D. (2015). Comparing the impact of dynamic and static media on students’ learning of one-dimensional kinematics. Eurasia Journal of Mathematics, Science & Technology Education, 11, 1119-1140.

* Nguyen, D. M., & Kulm, G. (2005). Using web-based practice to enhance mathematics learning and achievement. Journal of Interactive Online Learning, 100, 1-16.

* Nguyen, J., & Paschal, C. B. (2002). Development of online ultrasound instructional module and comparison to traditional teaching methods. Journal of Engineering Education, 91, 275-283.

* Nikou, S. A., & Economides, A. A. (2018). Mobile-based micro-learning and assessment: Impact on learning performance and motivation of high school students. Journal of Computer Assisted Learning, 34, 269-278.

Nora, A., & Snyder, B. P. (2008). Technology and higher education: The impact of e-learning approaches on student academic achievement, perceptions and persistence. Journal of College Student Retention: Research, Theory & Practice, 10 , 3–19.

* Nouri, J., Cerratto-Pargman, T., Rossitto, C., & Ramberg, R. (2014). Learning with or without mobile devices? A comparison of traditional school fieldtrips and inquiry-based mobile learning activities. Research & Practice in Technology Enhanced Education, 9, 241-262.

* Oglesbee, T. W., Bitner, L. N., & Wright, G. B. (1988). Measurement of incremental benefits in computer enhanced instruction. Issues in Accounting Education, 3, 365-377.

* Olkun, S. (2003). Comparing computer versus concrete manipulatives in learning 2D geometry. Journal of Computers in Mathematics & Science Teaching, 22, 43-56.

* Pei, X., Jin, Y., Zheng, T., & Zhao, J. (2020). Longitudinal effect of a technology-enhanced learning environment on sixth-grade students’ science learning: The role of reflection. International Journal of Science Education, 42, 271-289.

* Perry, J. L., Cunningham, L. D., Gamage, J. K., & Kuehn, D. P. (2011). Do 3D stereoscopic computer animations improve student learning of surgical procedures? International Journal of Instructional Media, 38, 369-378.

Pressey, S. L. (1926). A simple apparatus which gives tests and scores—and teaches. School & Society, 23 , 373–376.

Pressey, S. L. (1927). A machine for automatic teaching of drill material. School & Society, 25 , 549–552.

Price, L., & Kirkwood, A. (2014). Using technology for teaching and learning in higher education: A critical review of the role of evidence in informing practice. Higher Education Research & Development, 33 , 549–564.

Rawson, K. A., & Dunlosky, J. (2011). Optimizing schedules of retrieval practice for durable and efficient learning: How much is enough? Journal of Experimental Psychology: General, 140 , 283–302.

Roediger III, H. L., & Butler, A. C. (2011). The critical role of retrieval practice in long-term retention. Trends in Cognitive Sciences, 15 , 20–27.

Roediger III, H. L., & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17 , 249–255.

Rohrer, D. (2015). Student instruction should be distributed over long time periods. Educational Psychology Review, 27 , 635–643.

Rosen, Y., & Salomon, G. (2007). The differential learning achievements of constructivist technology-intensive learning environments as compared with traditional ones: A meta-analysis. Journal of Educational Computing Research, 36 , 1–14.

Schacter, J., & Fagnano, C. (1999). Does computer technology improve student learning and achievement? How, when, and under what conditions? Journal of Educational Computing Research, 20 , 329–343.

* Schoenfeld-Tacher, R., McConnell, S., & Graham, M. (2001). Do no harm—A comparison of the effects of on-line vs. traditional delivery media on a science course. Journal of Science Education & Technology, 10, 257-265.

* Shadiev, R., Hwang, W-Y., & Liu, T-Y. (2018). Investigating the effectiveness of a learning activity supported by a mobile multimedia learning system to enhance autonomous EFL learning in authentic contexts. Educational Technology Research & Development, 66, 893-912.

* Siciliano, P. C., Jenks, M. A., Dana, M. N., & Talbert, B. A. (2011). The impact of audio technology on undergraduate instruction in a study abroad course on English gardens. NACTA Journal, 55, 46-53.

Skinner, B. F. (1958). Teaching machines. Science, 128 , 969–977.

* Spichtig, A. N., Gehsmann, K. M., Pascoe, J. P., & Ferrara, J. D. (2019). The impact of adaptive, web-based, scaffolded silent reading instruction on the reading achievement of students in grades 4 and 5. The Elementary School Journal, 119, 443-467.

* Steinweg, S. B., Davis, M. L., & Thomson, W. S. (2005). A comparison of traditional and online instruction in an introduction to special education course. Teacher Education & Special Education, 28, 62-73.

* Su, C.-H., & Cheng, C.-H. (2014). A mobile gamification learning system for improving the learning motivation and achievements. Journal of Computer Assisted Learning, 31, 268-286.

Swenson, P. W., & Evans, M. (2003). Hybrid courses as learning communities. In S. Reisman (Ed.), Electronic learning communities issues and practices (pp. 27–72). Greenwich, CT: Information Age Publishing.

Thalheimer, W., & Cook, S. (2019). How to calculate effect sizes from published research articles: A simplified methodology. Retrieved September 3, 2019 from http://work-learning.com/effect_sizes.htm.

* Tilidetzke, R. (1992). A comparison of CAI and traditional instruction in a college algebra course. Journal of Computers in Mathematics & Science Teaching, 11, 53-62.

* Turan, Z., Meral, E., & Sahin, I. F. (2018). The impact of mobile augmented reality in geography education: Achievements, cognitive loads and views of university students. Journal of Geography in Higher Education, 42, 427-441.

* Verdugo, D. R., & Belmonte, I. A. (2007). Using digital stories to improve listening comprehension with Spanish young learners of English. Language Learning & Technology, 11, 87-101.

* Vichitvejpaisal, P., Sitthikongsak, S., Preechakoon, B., Kraiprasit, K., Parakkamodom, S., Manon, C., & Petcharatana, S. (2001). Does computer-assisted instruction really help to improve the learning process? Medical Education, 35, 983-989.

* Wang, S., & Sleeman, P. J. (1993). A comparison of the relative effectiveness of computer-assisted instruction and conventional methods for teaching an operations management course in a school of business. International Journal of Instructional Media, 20, 225-234.

* Wiebe, J. H., & Martin, N. J. (1994). The impact of a computer-based adventure game on achievement and attitudes in geography. Journal of Computing in Childhood Education, 5, 61-71.

* Wiesner, T. F., & Lan, W. (2004). Comparison of student learning in physical and simulated unit operations experiments. Journal of Engineering Education, 93, 195-204.

* William, A., Vidal, V. L., & John, P. (2016). Traditional instruction versus virtual reality simulation: A comparative study of phlebotomy training among nursing students in Kuwait. Journal of Education & Practice, 7, 18-25.

* Wu, T-T. (2018). Improving the effectiveness of English vocabulary review by integrating ARCS with mobile game-based learning. Journal of Computer Assisted Learning, 34, 315-323.

* Yarahmadzehi, N., & Goodarzi, M. (2020). Investigating the role of formative mobile based assessment in vocabulary learning of pre-intermediate EFL learners in comparison with paper based assessment. Turkish Online Journal of Distance Education, 21, 181-196.

* Yildirim, Z., Ozden, M. Y., & Aksu, M. (2001). Comparison of hypermedia learning and traditional instruction on knowledge acquisition and retention. The Journal of Educational Research, 94, 207-214.

* Zaini, A., & Mazdayasna, G. (2015). The impact of computer-based instruction on the development of EFL learners’ writing skills. Journal of Computer Assisted Learning, 31, 516-528.

* Zubas, P., Heiss, C., & Pedersen, M. (2006). Comparing the effectiveness of a supplemental online tutorial to traditional instruction with nutritional science students. Journal of Interactive Online Learning, 5, 75-81.

Download references

Author information

Authors and Affiliations

Department of Psychology, Iowa State University, W112 Lagomarcino Hall, 901 Stange Road, Ames, IA, 50011, USA

Kam Leung Yeung & Shana K. Carpenter

Department of Psychology, Syracuse University, Syracuse, NY, USA

Daniel Corral


Corresponding author

Correspondence to Shana K. Carpenter .

Ethics declarations

Conflict of Interest

Shana Carpenter has received grants from the National Science Foundation (DUE 1504480) and the James S. McDonnell Foundation (220020483).

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This material is based upon work supported by the James S. McDonnell Foundation 21st Century Science Initiative in Understanding Human Cognition, Collaborative Grant No. 220020483. We thank Sierra Lauber, Luke Huber, and Kyle St. Hilaire for their help in locating articles.


About this article

Yeung, K.L., Carpenter, S.K. & Corral, D. A Comprehensive Review of Educational Technology on Objective Learning Outcomes in Academic Contexts. Educ Psychol Rev 33 , 1583–1630 (2021). https://doi.org/10.1007/s10648-020-09592-4

Download citation

Accepted: 28 December 2020

Published: 05 April 2021

Issue Date: December 2021


Keywords: Cognitive Science, Effective Learning Principles

Realizing the Promise: How Can Education Technology Improve Learning for All?

Leading up to the 75th anniversary of the UN General Assembly, “Realizing the promise: How can education technology improve learning for all?” is the first in a series of playbooks from the Center for Universal Education aimed at helping improve education around the world.

It is intended as an evidence-based tool for ministries of education, particularly in low- and middle-income countries, to adopt and more successfully invest in education technology.

While there is no single education initiative that will achieve the same results everywhere—as school systems differ in learners and educators, as well as in the availability and quality of materials and technologies—an important first step is understanding how technology is used given specific local contexts and needs.

The surveys in this playbook are designed to be adapted to collect this information from educators, learners, and school leaders and guide decisionmakers in expanding the use of technology.  

Introduction

While technology has disrupted most sectors of the economy and changed how we communicate, access information, work, and even play, its impact on schools, teaching, and learning has been much more limited. We believe that this limited impact is primarily due to technology being used to replace analog tools, without much consideration given to playing to technology’s comparative advantages. These comparative advantages, relative to traditional “chalk-and-talk” classroom instruction, include helping to scale up standardized instruction, facilitate differentiated instruction, expand opportunities for practice, and increase student engagement. When schools use technology to enhance the work of educators and to improve the quality and quantity of educational content, learners will thrive.

Further, COVID-19 has laid bare that, in a world where pandemics and the effects of climate change are increasingly likely to disrupt schooling, schools cannot always provide in-person education—strengthening the case for investing in education technology.

Here we argue for a simple yet surprisingly rare approach to education technology that seeks to:

  • Understand the needs, infrastructure, and capacity of a school system—the diagnosis;
  • Survey the best available evidence on interventions that match those conditions—the evidence; and
  • Closely monitor the results of innovations before they are scaled up—the prognosis.


The framework

Our approach builds on a simple yet intuitive theoretical framework created two decades ago by two of the most prominent education researchers in the United States, David K. Cohen and Deborah Loewenberg Ball. They argue that what matters most to improve learning is the interactions among educators and learners around educational materials. We believe that the failed school-improvement efforts in the U.S. that motivated Cohen and Ball’s framework resemble the ed-tech reforms in much of the developing world to date in their lack of focus on improving the interactions between educators, learners, and educational materials. We build on their framework by adding parents as key agents that mediate the relationships between learners and educators and the material (Figure 1).

Figure 1: The instructional core

Adapted from Cohen and Ball (1999)

As the figure above suggests, ed-tech interventions can affect the instructional core in myriad ways. Yet just because technology can do something does not mean it should. School systems in developing countries differ along many dimensions, and each system is likely to have different needs for ed-tech interventions, as well as different infrastructure and capacity to enact such interventions.

The diagnosis:

How can school systems assess their needs and preparedness?

A useful first step for any school system to determine whether it should invest in education technology is to diagnose its:

  • Specific needs to improve student learning (e.g., raising the average level of achievement, remediating gaps among low performers, and challenging high performers to develop higher-order skills);
  • Infrastructure to adopt technology-enabled solutions (e.g., electricity connection, availability of space and outlets, stock of computers, and Internet connectivity at school and at learners’ homes); and
  • Capacity to integrate technology in the instructional process (e.g., learners’ and educators’ level of familiarity and comfort with hardware and software, their beliefs about the level of usefulness of technology for learning purposes, and their current uses of such technology).
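The three diagnostic dimensions above can be sketched as a simple readiness record. This is an illustrative sketch only: the field names and the screening rule are hypothetical assumptions, not part of the playbook’s actual surveys.

```python
from dataclasses import dataclass

@dataclass
class EdTechReadiness:
    """Toy summary of the three diagnostic dimensions (needs,
    infrastructure, capacity); all fields are illustrative."""
    # Needs: which learning problem is the system trying to solve?
    needs: list  # e.g., ["remediate gaps among low performers"]
    # Infrastructure: can schools physically run the intervention?
    has_electricity: bool
    has_internet: bool
    devices_per_learner: float
    # Capacity: are educators comfortable enough with the tools?
    educator_familiarity: float  # 0.0 (none) to 1.0 (fluent)

    def ready_for_online_tools(self) -> bool:
        # A crude screening rule (hypothetical thresholds): online
        # interventions need power, connectivity, some devices, and
        # minimally comfortable educators.
        return (self.has_electricity and self.has_internet
                and self.devices_per_learner >= 0.2
                and self.educator_familiarity >= 0.5)
```

A record like this makes the diagnosis explicit before any evidence review: a system that fails the connectivity check, for instance, should not be shortlisting online tutoring.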

Before engaging in any new data collection exercise, school systems should take full advantage of existing administrative data that could shed light on these three main questions. This could be in the form of internal evaluations, but also international learner assessments such as the Program for International Student Assessment (PISA), the Trends in International Mathematics and Science Study (TIMSS), the Progress in International Reading Literacy Study (PIRLS), and the Teaching and Learning International Survey (TALIS). But if school systems lack information on their preparedness for ed-tech reforms, or if they seek to complement existing data with a richer set of indicators, we have developed a set of surveys for learners, educators, and school leaders. Download the full report to see how we map out the main aspects covered by these surveys, in hopes of highlighting how they could be used to inform decisions around the adoption of ed-tech interventions.

The evidence:

How can school systems identify promising ed-tech interventions?

There is no single “ed-tech” initiative that will achieve the same results everywhere, simply because school systems differ in learners and educators, as well as in the availability and quality of materials and technologies. Instead, to realize the potential of education technology to accelerate student learning, decisionmakers should focus on four uses of technology that play to its comparative advantages and complement the work of educators (Figure 2). These comparative advantages include:

  • Scaling up quality instruction, such as through prerecorded quality lessons.
  • Facilitating differentiated instruction, through, for example, computer-adaptive learning and live one-on-one tutoring.
  • Expanding opportunities to practice.
  • Increasing learner engagement through videos and games.

Figure 2: Comparative advantages of technology

Here we review the evidence on ed-tech interventions from 37 studies in 20 countries*, organizing them by comparative advantage. It’s important to note that ours is not the only way to classify these interventions (e.g., video tutorials could be considered as a strategy to scale up instruction or increase learner engagement), but we believe it may be useful to highlight the needs that they could address and why technology is well positioned to do so.

When discussing specific studies, we report the magnitude of the effects of interventions using standard deviations (SDs). SDs are a widely used metric in research to express the effect of a program or policy on an outcome (e.g., test scores) relative to a business-as-usual condition. There are several ways to make sense of them. One is to categorize the magnitude of the effects based on the results of impact evaluations. In developing countries, effects below 0.1 SDs are considered to be small, effects between 0.1 and 0.2 SDs are medium, and those above 0.2 SDs are large (for reviews that estimate the average effect of groups of interventions, called “meta-analyses,” see, e.g., Conn, 2017; Kremer, Brannen, & Glennerster, 2013; McEwan, 2014; Snilstveit et al., 2015; Evans & Yuan, 2020).
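The SD metric above can be made concrete with a short calculation. The sketch below (illustrative only, not taken from any of the cited reviews) computes a pooled-SD standardized effect size from two groups of test scores and applies the magnitude bands just described:

```python
from statistics import mean, stdev

def effect_size(treatment, control):
    """Standardized effect size: difference in group means divided by
    the pooled (sample) standard deviation of the two groups."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2)
                 / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

def classify(d):
    """Magnitude bands used for impact evaluations in developing
    countries: below 0.1 small, 0.1 to 0.2 medium, above 0.2 large."""
    if d < 0.1:
        return "small"
    if d <= 0.2:
        return "medium"
    return "large"
```

By this rule of thumb, an intervention that raises test scores by 0.19 SDs falls in the medium band, while one that raises them by 0.24 SDs falls in the large band.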

*In surveying the evidence, we began by compiling studies from prior general and ed-tech specific evidence reviews that some of us have written and from ed-tech reviews conducted by others. Then, we tracked the studies cited by the ones we had previously read and reviewed those, as well. In identifying studies for inclusion, we focused on experimental and quasi-experimental evaluations of education technology interventions from pre-school to secondary school in low- and middle-income countries that were released between 2000 and 2020. We only included interventions that sought to improve student learning directly (i.e., students’ interaction with the material), as opposed to interventions that have impacted achievement indirectly, by reducing teacher absence or increasing parental engagement. This process yielded 37 studies in 20 countries (see the full list of studies in Appendix B).

Scaling up standardized instruction

One of the ways in which technology may improve the quality of education is through its capacity to deliver standardized quality content at scale. This feature of technology may be particularly useful in three types of settings: (a) those in “hard-to-staff” schools (i.e., schools that struggle to recruit educators with the requisite training and experience—typically, in rural and/or remote areas) (see, e.g., Urquiola & Vegas, 2005); (b) those in which many educators are frequently absent from school (e.g., Chaudhury, Hammer, Kremer, Muralidharan, & Rogers, 2006; Muralidharan, Das, Holla, & Mohpal, 2017); and/or (c) those in which educators have low levels of pedagogical and subject matter expertise (e.g., Bietenbeck, Piopiunik, & Wiederhold, 2018; Bold et al., 2017; Metzler & Woessmann, 2012; Santibañez, 2006) and do not have opportunities to observe and receive feedback (e.g., Bruns, Costa, & Cunha, 2018; Cilliers, Fleisch, Prinsloo, & Taylor, 2018). Technology could address this problem by: (a) disseminating lessons delivered by qualified educators to a large number of learners (e.g., through prerecorded or live lessons); (b) enabling distance education (e.g., for learners in remote areas and/or during periods of school closures); and (c) distributing hardware preloaded with educational materials.

Prerecorded lessons

Technology seems to be well placed to amplify the impact of effective educators by disseminating their lessons. Evidence on the impact of prerecorded lessons is encouraging, but not conclusive. Some initiatives that have used short instructional videos to complement regular instruction, in conjunction with other learning materials, have raised student learning on independent assessments. For example, Beg et al. (2020) evaluated an initiative in Punjab, Pakistan in which grade 8 classrooms received an intervention that included short videos to substitute live instruction, quizzes for learners to practice the material from every lesson, tablets for educators to learn the material and follow the lesson, and LED screens to project the videos onto a classroom screen. After six months, the intervention improved the performance of learners on independent tests of math and science by 0.19 and 0.24 SDs, respectively, but had no discernible effect on the math and science sections of Punjab’s high-stakes exams.

One study suggests that approaches that are far less technologically sophisticated can also improve learning outcomes—especially, if the business-as-usual instruction is of low quality. For example, Naslund-Hadley, Parker, and Hernandez-Agramonte (2014) evaluated a preschool math program in Cordillera, Paraguay that used audio segments and written materials four days per week for an hour per day during the school day. After five months, the intervention improved math scores by 0.16 SDs, narrowing gaps between low- and high-achieving learners, and between those with and without educators with formal training in early childhood education.

Yet, the integration of prerecorded material into regular instruction has not always been successful. For example, de Barros (2020) evaluated an intervention that combined instructional videos for math and science with infrastructure upgrades (e.g., two “smart” classrooms, two TVs, and two tablets), printed workbooks for students, and in-service training for educators of learners in grades 9 and 10 in Haryana, India (all materials were mapped onto the official curriculum). After 11 months, the intervention negatively impacted math achievement (by 0.08 SDs) and had no effect on science (with respect to business as usual classes). It reduced the share of lesson time that educators devoted to instruction and negatively impacted an index of instructional quality. Likewise, Seo (2017) evaluated several combinations of infrastructure (solar lights and TVs) and prerecorded videos (in English and/or bilingual) for grade 11 students in northern Tanzania and found that none of the variants improved student learning, even when the videos were used. The study reports effects from the infrastructure component across variants, but as others have noted (Muralidharan, Romero, & Wüthrich, 2019), this approach to estimating impact is problematic.

A very similar intervention delivered after school hours, however, had sizeable effects on learners’ basic skills. Chiplunkar, Dhar, and Nagesh (2020) evaluated an initiative in Chennai (the capital city of the state of Tamil Nadu, India) delivered by the same organization as above that combined short videos that explained key concepts in math and science with worksheets, facilitator-led instruction, small groups for peer-to-peer learning, and occasional career counseling and guidance for grade 9 students. These lessons took place after school for one hour, five times a week. After 10 months, it had large effects on learners’ achievement as measured by tests of basic skills in math and reading, but no effect on a standardized high-stakes test in grade 10 or socio-emotional skills (e.g., teamwork, decisionmaking, and communication).

Drawing general lessons from this body of research is challenging for at least two reasons. First, all of the studies above have evaluated the impact of prerecorded lessons combined with several other components (e.g., hardware, print materials, or other activities). Therefore, it is possible that the effects found are due to these additional components, rather than to the recordings themselves, or to the interaction between the two (see Muralidharan, 2017 for a discussion of the challenges of interpreting “bundled” interventions). Second, while these studies evaluate some type of prerecorded lessons, none examines the content of such lessons. Thus, it seems entirely plausible that the direction and magnitude of the effects depends largely on the quality of the recordings (e.g., the expertise of the educator recording it, the amount of preparation that went into planning the recording, and its alignment with best teaching practices).

These studies also raise three important questions worth exploring in future research. One of them is why none of the interventions discussed above had effects on high-stakes exams, even if their materials are typically mapped onto the official curriculum. It is possible that the official curricula are simply too challenging for learners in these settings, who are several grade levels behind expectations and who often need to reinforce basic skills (see Pritchett & Beatty, 2015). Another question is whether these interventions have long-term effects on teaching practices. It seems plausible that, if these interventions are deployed in contexts with low teaching quality, educators may learn something from watching the videos or listening to the recordings with learners. Yet another question is whether these interventions make it easier for schools to deliver instruction to learners whose native language is other than the official medium of instruction.

Distance education

Technology can also allow learners living in remote areas to access education. The evidence on these initiatives is encouraging. For example, Johnston and Ksoll (2017) evaluated a program that broadcasted live instruction via satellite to rural primary school students in the Volta and Greater Accra regions of Ghana. For this purpose, the program also equipped classrooms with the technology needed to connect to a studio in Accra, including solar panels, a satellite modem, a projector, a webcam, microphones, and a computer with interactive software. After two years, the intervention improved the numeracy scores of students in grades 2 through 4, and some foundational literacy tasks, but it had no effect on attendance or classroom time devoted to instruction, as captured by school visits. The authors interpreted these results as suggesting that the gains in achievement may be due to improving the quality of instruction that children received (as opposed to increased instructional time). Naik, Chitre, Bhalla, and Rajan (2019) evaluated a similar program in the Indian state of Karnataka and also found positive effects on learning outcomes, but it is not clear whether those effects are due to the program or due to differences in the groups of students they compared to estimate the impact of the initiative.

In one context (Mexico), this type of distance education had positive long-term effects. Navarro-Sola (2019) took advantage of the staggered rollout of the telesecundarias (i.e., middle schools with lessons broadcast through satellite TV) starting in 1968 to estimate its impact. The policy had short-term effects on students’ enrollment in school: for every telesecundaria per 50 children, 10 students enrolled in middle school and two pursued further education. It also had a long-term influence on the educational and employment trajectory of its graduates: each additional year of education induced by the policy increased average income by nearly 18 percent, an effect attributable to more graduates entering the labor force and shifting out of agriculture and the informal sector. Similarly, Fabregas (2019) leveraged a later expansion of this policy in 1993 and found that each additional telesecundaria per 1,000 adolescents led to an average increase of 0.2 years of education and a decline in fertility for women, but no conclusive evidence of long-term effects on labor market outcomes.

It is crucial to interpret these results keeping in mind the settings where the interventions were implemented. As we mention above, part of the reason why they have proven effective is that the “counterfactual” conditions for learning (i.e., what would have happened to learners in the absence of such programs) were either to not have access to schooling or to be exposed to low-quality instruction. School systems interested in taking up similar interventions should assess the extent to which their learners (or parts of their learner population) find themselves in conditions similar to those of the subjects in the studies above. This illustrates the importance of assessing the needs of a system before reviewing the evidence.

Preloaded hardware

Technology also seems well positioned to disseminate educational materials. Specifically, hardware (e.g., desktop computers, laptops, or tablets) could also help deliver educational software (e.g., word processing, reference texts, and/or games). In theory, these materials could not only undergo a quality assurance review (e.g., by curriculum specialists and educators), but also draw on the interactions with learners for adjustments (e.g., identifying areas needing reinforcement) and enable interactions between learners and educators.

In practice, however, most initiatives that have provided learners with free computers, laptops, and netbooks do not leverage any of the opportunities mentioned above. Instead, they install a standard set of educational materials and hope that learners find them helpful enough to take them up on their own. Students rarely do so, and instead use the laptops for recreational purposes—often, to the detriment of their learning (see, e.g., Malamud & Pop-Eleches, 2011). In fact, free netbook initiatives have not only consistently failed to improve academic achievement in math or language (e.g., Cristia et al., 2017), but they have also had no impact on learners’ general computer skills (e.g., Beuermann et al., 2015). Some of these initiatives have had small impacts on cognitive skills, but the mechanisms through which those effects occurred remain unclear.

To our knowledge, the only successful deployment of a free laptop initiative was one in which a team of researchers equipped the computers with remedial software. Mo et al. (2013) evaluated a version of the One Laptop per Child (OLPC) program for grade 3 students in migrant schools in Beijing, China in which the laptops were loaded with a remedial software mapped onto the national curriculum for math (similar to the software products that we discuss under “practice exercises” below). After nine months, the program improved math achievement by 0.17 SDs and computer skills by 0.33 SDs. If a school system decides to invest in free laptops, this study suggests that the quality of the software on the laptops is crucial.

To date, however, the evidence suggests that children do not learn more from interacting with laptops than they do from textbooks. For example, Bando, Gallego, Gertler, and Romero (2016) compared the effect of free laptop and textbook provision in 271 elementary schools in disadvantaged areas of Honduras. After seven months, students in grades 3 and 6 who had received the laptops performed on par with those who had received the textbooks in math and language. Further, even if textbooks essentially become obsolete at the end of each school year, whereas laptops can be reloaded with new materials for each year, the costs of laptop provision (not just the hardware, but also the technical assistance, Internet, and training associated with it) are not yet low enough to make them a more cost-effective way of delivering content to learners.

Evidence on the provision of tablets equipped with software is encouraging but limited. For example, de Hoop et al. (2020) evaluated a composite intervention for first grade students in Zambia’s Eastern Province that combined infrastructure (electricity via solar power), hardware (projectors and tablets), and educational materials (lesson plans for educators and interactive lessons for learners, both loaded onto the tablets and mapped onto the official Zambian curriculum). After 14 months, the intervention had improved student early-grade reading by 0.4 SDs, oral vocabulary scores by 0.25 SDs, and early-grade math by 0.22 SDs. It also improved students’ achievement by 0.16 SDs on a locally developed assessment. The multifaceted nature of the program, however, makes it challenging to identify the components that are driving the positive effects. Pitchford (2015) evaluated an intervention that provided tablets equipped with educational “apps,” to be used for 30 minutes per day for two months, to develop early math skills among students in grades 1 through 3 in Lilongwe, Malawi. The evaluation found positive impacts on math achievement, but the main limitation of the study is that it was conducted in a single school.

Facilitating differentiated instruction

Another way in which technology may improve educational outcomes is by facilitating the delivery of differentiated or individualized instruction. Most developing countries massively expanded access to schooling in recent decades by building new schools and making education more affordable, both by defraying direct costs and by compensating for opportunity costs (Duflo, 2001; World Bank, 2018). These initiatives have not only rapidly increased the number of learners enrolled in school, but have also increased the variability in learners’ preparation for schooling. Consequently, a large number of learners perform well below grade-based curricular expectations (see, e.g., Duflo, Dupas, & Kremer, 2011; Pritchett & Beatty, 2015). These learners are unlikely to get much from “one-size-fits-all” instruction, in which a single educator delivers instruction deemed appropriate for the middle (or top) of the achievement distribution (Banerjee & Duflo, 2011). Technology could potentially help these learners by providing them with: (a) instruction and opportunities for practice that adjust to the level and pace of preparation of each individual (known as “computer-adaptive learning” (CAL)); or (b) live, one-on-one tutoring.

Computer-adaptive learning

One of the main comparative advantages of technology is its ability to diagnose students’ initial learning levels and assign students to instruction and exercises of appropriate difficulty. No individual educator—no matter how talented—can be expected to provide individualized instruction to all learners in his/her class simultaneously. In this respect, technology is uniquely positioned to complement traditional teaching. This use of technology could help learners master basic skills and help them get more out of schooling.

Although many software products evaluated in recent years have been categorized as CAL, many rely on a relatively coarse level of differentiation at an initial stage (e.g., a diagnostic test) without further differentiation. We discuss these initiatives under the category of “increasing opportunities for practice” below. True CAL initiatives complement an initial diagnostic with dynamic adaptation: at each response (or set of responses) from learners, the software adjusts both the level of difficulty and the rate at which it increases or decreases, depending on whether learners’ responses are correct or incorrect.
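The dynamic-adaptation logic described above can be sketched as a toy “staircase” rule: a diagnostic sets the starting difficulty, and each answer nudges the level up or down. This is an illustrative assumption only; actual CAL products use far richer response models.

```python
class AdaptiveSession:
    """Toy computer-adaptive learning loop: coarse initial placement
    from a diagnostic, then per-response difficulty adjustment."""

    def __init__(self, diagnostic_score, min_level=1, max_level=10):
        # Initial placement from a diagnostic test score in [0.0, 1.0].
        self.min_level, self.max_level = min_level, max_level
        self.level = max(min_level, round(diagnostic_score * max_level))

    def record_answer(self, correct):
        # Dynamic adaptation: move up one level after a correct
        # response, down one after an incorrect one, within bounds.
        step = 1 if correct else -1
        self.level = min(self.max_level,
                         max(self.min_level, self.level + step))
        return self.level
```

The contrast with the “practice exercises” category below is that those products stop after the initial placement step, whereas here every response feeds back into the difficulty level.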

Existing evidence on this specific type of program is highly promising. Most famously, Banerjee et al. (2007) evaluated CAL software in Vadodara, in the Indian state of Gujarat, in which grade 4 students were offered two hours of shared computer time per week before and after school, during which they played games that involved solving math problems. The level of difficulty of such problems adjusted based on students’ answers. This program improved math achievement by 0.35 and 0.47 SDs after one and two years of implementation, respectively. Consistent with the promise of personalized learning, the software improved achievement for all students. In fact, one year after the end of the program, students assigned to the program still performed 0.1 SDs better than those assigned to a business-as-usual condition. More recently, Muralidharan et al. (2019) evaluated a “blended learning” initiative in which students in grades 4 through 9 in Delhi, India received 45 minutes of interaction with CAL software for math and language, and 45 minutes of small-group instruction before or after going to school. After only 4.5 months, the program improved achievement by 0.37 SDs in math and 0.23 SDs in Hindi. While all learners benefited from the program in absolute terms, the lowest-performing learners benefited the most in relative terms, since they were learning very little in school.

We see two important limitations in this body of research. First, to our knowledge, none of these initiatives has been evaluated when implemented during the school day. Therefore, it is not possible to distinguish the effect of the adaptive software from that of additional instructional time. Second, given that most of these programs were facilitated by local instructors, attempts to distinguish the effect of the software from that of the instructors have been based mostly on noncausal evidence. A frontier challenge in this body of research is to understand whether CAL software can increase the effectiveness of school-based instruction by substituting for part of the regularly scheduled time for math and language instruction.

Live one-on-one tutoring

Recent improvements in the speed and quality of videoconferencing, as well as in the connectivity of remote areas, have enabled yet another way in which technology can help personalization: live (i.e., real-time) one-on-one tutoring. While the evidence on in-person tutoring is scarce in developing countries, existing studies suggest that this approach works best when it is used to personalize instruction (see, e.g., Banerjee et al., 2007; Banerji, Berry, & Shotland, 2015; Cabezas, Cuesta, & Gallego, 2011).

There are almost no studies on the impact of online tutoring—possibly due to the lack of hardware and Internet connectivity in low- and middle-income countries. One exception is Chemin and Oledan’s (2020) recent evaluation of an online tutoring program in which grade 6 students in Kianyaga, Kenya learned English from volunteers at a Canadian university via Skype (videoconferencing software) for one hour per week after school. After 10 months, program beneficiaries performed 0.22 SDs better on a test of oral comprehension, improved their comfort using technology for learning, and became more willing to engage in cross-cultural communication. Importantly, while the tutoring sessions used the official English textbooks and sought in part to help learners with their homework, tutors were trained on several strategies to teach to each learner’s individual level of preparation, focusing on basic skills if necessary. To our knowledge, similar initiatives within a country have not yet been rigorously evaluated.

Expanding opportunities for practice

A third way in which technology may improve the quality of education is by providing learners with additional opportunities for practice. In many developing countries, lesson time is primarily devoted to lectures, in which the educator explains the topic and the learners passively copy explanations from the blackboard. This setup leaves little time for in-class practice. Consequently, learners who did not understand the explanation of the material during lecture struggle when they have to solve homework assignments on their own. Technology could potentially address this problem by allowing learners to review topics at their own pace.

Practice exercises

Technology can help learners get more out of traditional instruction by providing them with opportunities to implement what they learn in class. This approach could, in theory, allow some learners to anchor their understanding of the material through trial and error (i.e., by realizing what they may not have understood correctly during lecture and by getting better acquainted with special cases not covered in-depth in class).

Existing evidence on practice exercises reflects both the promise and the limitations of this use of technology in developing countries. For example, Lai et al. (2013) evaluated a program in Shaanxi, China in which students in grades 3 and 5 were required to attend two 40-minute remedial sessions per week: they first watched videos reviewing the material introduced in that week’s math lessons and then played games to practice the skills introduced in the videos. After four months, the intervention improved math achievement by 0.12 SDs. Many other evaluations of comparable interventions have found similar small-to-moderate results (see, e.g., Lai, Luo, Zhang, Huang, & Rozelle, 2015; Lai et al., 2012; Mo et al., 2015; Pitchford, 2015). These effects, however, have been consistently smaller than those of initiatives that adjust the difficulty of the material based on students’ performance (e.g., Banerjee et al., 2007; Muralidharan et al., 2019). We hypothesize that these programs do little for learners who perform several grade levels behind curricular expectations, and who would benefit more from a review of foundational concepts from earlier grades.

We see two important limitations from this research. First, most initiatives that have been evaluated thus far combine instructional videos with practice exercises, so it is hard to know whether their effects are driven by the former or the latter. In fact, the program in China described above allowed learners to ask their peers whenever they did not understand a difficult concept, so it potentially also captured the effect of peer-to-peer collaboration. To our knowledge, no studies have addressed this gap in the evidence.

Second, most of these programs are implemented before or after school, so we cannot distinguish the effect of additional instructional time from that of the actual opportunity for practice. The importance of this question was first highlighted by Linden (2008), who compared two delivery mechanisms for game-based remedial math software for students in grades 2 and 3 in a network of schools run by a nonprofit organization in Gujarat, India: one in which students interacted with the software during the school day and another one in which students interacted with the software before or after school (in both cases, for three hours per day). After a year, the first version of the program had negatively impacted students’ math achievement by 0.57 SDs and the second one had a null effect. This study suggested that computer-assisted learning is a poor substitute for regular instruction when it is of high quality, as was the case in this well-functioning private network of schools.

In recent years, several studies have sought to remedy this shortcoming. Mo et al. (2014) were among the first to evaluate practice exercises delivered during the school day. They evaluated an initiative in Shaanxi, China in which students in grades 3 and 5 were required to interact with software similar to that in Lai et al. (2013) for two 40-minute sessions per week. The main limitation of this study, however, is that the program was delivered during regularly scheduled computer lessons, so it could not determine the impact of substituting regular math instruction. Similarly, Mo et al. (2020) evaluated a self-paced and a teacher-directed version of a similar program for English for grade 5 students in Qinghai, China. Yet, the key shortcoming of this study is that the teacher-directed version added several components that may also influence achievement, such as increased opportunities for teachers to provide students with personalized assistance when they struggled with the material. Ma, Fairlie, Loyalka, and Rozelle (2020) compared the effectiveness of additional remedial instruction for students in grades 4 to 6 in Shaanxi, China, delivered through either computer-assisted software or workbooks. This study indicates whether additional instructional time is more effective when using technology, but it does not address the question of whether school systems may improve the productivity of instructional time during the school day by substituting educator-led with computer-assisted instruction.

Increasing learner engagement

Another way in which technology may improve education is by increasing learners’ engagement with the material. In many school systems, regular “chalk and talk” instruction prioritizes time for educators’ exposition over opportunities for learners to ask clarifying questions and/or contribute to class discussions. This, combined with the fact that many developing-country classrooms include a very large number of learners (see, e.g., Angrist & Lavy, 1999; Duflo, Dupas, & Kremer, 2015), may partially explain why the majority of those students are several grade levels behind curricular expectations (e.g., Muralidharan et al., 2019; Muralidharan & Zieleniak, 2014; Pritchett & Beatty, 2015). Technology could potentially address these challenges by: (a) using video tutorials for self-paced learning and (b) presenting exercises as games and/or gamifying practice.

Video tutorials

Technology can potentially increase learner effort and understanding of the material by finding new and more engaging ways to deliver it. Video tutorials designed for self-paced learning—as opposed to videos for whole class instruction, which we discuss under the category of “prerecorded lessons” above—can increase learner effort in multiple ways, including: allowing learners to focus on topics with which they need more help, letting them correct errors and misconceptions on their own, and making the material appealing through visual aids. They can increase understanding by breaking the material into smaller units and tackling common misconceptions.

In spite of the popularity of instructional videos, there is relatively little evidence on their effectiveness. Yet, two recent evaluations of different versions of the Khan Academy portal, which mainly relies on instructional videos, offer some insight into their impact. First, Ferman, Finamor, and Lima (2019) evaluated an initiative in 157 public primary and middle schools in five cities in Brazil in which students in grades 5 and 9 were taken to the computer lab to learn math from the platform for 50 minutes per week. The authors found that, while the intervention slightly improved learners’ attitudes toward math, these changes did not translate into better performance in this subject. The authors hypothesized that this could be due to the reduction of teacher-led math instruction.

More recently, Büchel, Jakob, Kühnhanss, Steffen, and Brunetti (2020) evaluated an after-school, offline delivery of the Khan Academy portal in grades 3 through 6 in 302 primary schools in Morazán, El Salvador. Students in this study received 90 minutes per week of additional math instruction (effectively nearly doubling total math instruction per week) through teacher-led regular lessons, teacher-assisted Khan Academy lessons, or similar lessons assisted by technical supervisors with no content expertise. (Importantly, the first group provided differentiated instruction, which is not the norm in Salvadoran schools.) All three groups outperformed two comparison groups: schools without any additional lessons, and classrooms in the same schools as the program that received no additional lessons. The teacher-assisted Khan Academy lessons performed 0.24 SDs better, the supervisor-led lessons 0.22 SDs better, and the teacher-led regular lessons 0.15 SDs better, but the authors could not determine whether the effects of the three versions differed from one another.

Together, these studies suggest that instructional videos work best when provided as a complement to, rather than as a substitute for, regular instruction. Yet, the main limitation of these studies is the multifaceted nature of the Khan Academy portal, which also includes other components found to improve learner achievement, such as differentiated instruction by students’ learning levels. While the software does not provide the type of personalization discussed above, learners are asked to take a placement test and, based on their score, educators assign them different work. Therefore, it is not clear from these studies whether the effects from Khan Academy are driven by its instructional videos or by the software’s ability to provide differentiated activities when combined with placement tests.

Games and gamification

Technology can also increase learner engagement by presenting exercises as games and/or by encouraging learners to play and compete with others (e.g., using leaderboards and rewards)—an approach known as “gamification.” Both approaches can increase learner motivation and effort by presenting learners with entertaining opportunities for practice and by leveraging peers as commitment devices.

There are very few studies on the effects of games and gamification in low- and middle-income countries. Recently, Araya, Arias Ortiz, Bottan, and Cristia (2019) evaluated an initiative in which grade 4 students in Santiago, Chile were required to participate in two 90-minute sessions per week during the school day with instructional math software featuring individual and group competitions (e.g., tracking each learner’s standing in his/her class and tournaments between sections). After nine months, the program led to improvements of 0.27 SDs in the national student assessment in math (it had no spillover effects on reading). However, it had mixed effects on non-academic outcomes. Specifically, the program increased learners’ willingness to use computers to learn math, but, at the same time, increased their anxiety toward math and negatively impacted learners’ willingness to collaborate with peers. Finally, given that one of the weekly sessions replaced regular math instruction and the other one represented additional math instructional time, it is not clear whether the academic effects of the program are driven by the software or the additional time devoted to learning math.

The prognosis: How can school systems adopt interventions that match their needs?

Here are five specific and sequential guidelines for decisionmakers to realize the potential of education technology to accelerate student learning.

1. Take stock of how your current schools, educators, and learners are engaging with technology.

Carry out a short in-school survey to understand the current practices and potential barriers to adoption of technology (we have included suggested survey instruments in the Appendices); use this information in your decisionmaking process. For example, we learned from conversations with current and former ministers of education from various developing regions that a common limitation to technology use is regulations that hold school leaders accountable for damages to or losses of devices. Another common barrier is lack of access to electricity and Internet, or even the availability of sufficient outlets for charging devices in classrooms. Understanding basic infrastructure and regulatory limitations to the use of education technology is a first necessary step. But addressing these limitations will not guarantee that introducing or expanding technology use will accelerate learning. The next steps are thus necessary.

“In Africa, the biggest limit is connectivity. Fiber is expensive, and we don’t have it everywhere. The continent is creating a digital divide between cities, where there is fiber, and the rural areas.  The [Ghanaian] administration put in schools offline/online technologies with books, assessment tools, and open source materials. In deploying this, we are finding that again, teachers are unfamiliar with it. And existing policies prohibit students to bring their own tablets or cell phones. The easiest way to do it would have been to let everyone bring their own device. But policies are against it.” H.E. Matthew Prempeh, Minister of Education of Ghana, on the need to understand the local context.

2. Consider how the introduction of technology may affect the interactions among learners, educators, and content.

Our review of the evidence indicates that technology may accelerate student learning when it is used to scale up access to quality content, facilitate differentiated instruction, increase opportunities for practice, or increase learner engagement. For example, will adding electronic whiteboards to classrooms facilitate access to more quality content or differentiated instruction? Or will these expensive boards be used in the same way as the old chalkboards? Will providing one device (laptop or tablet) to each learner facilitate access to more and better content, or offer students more opportunities to practice and learn? Solely introducing technology in classrooms without additional changes is unlikely to lead to improved learning and may be quite costly. If you cannot clearly identify how the interactions among the three key components of the instructional core (educators, learners, and content) may change after the introduction of technology, then it is probably not a good idea to make the investment. See Appendix A for guidance on the types of questions to ask.

3. Once decisionmakers have a clear idea of how education technology can help accelerate student learning in a specific context, it is important to define clear objectives and goals and establish ways to regularly assess progress and make course corrections in a timely manner.

For instance, is the education technology expected to ensure that learners in early grades excel in foundational skills—basic literacy and numeracy—by age 10? If so, will the technology provide quality reading and math materials, ample opportunities to practice, and engaging materials such as videos or games? Will educators be empowered to use these materials in new ways? And how will progress be measured and adjusted?

4. How this kind of reform is approached can matter immensely for its success.

It is easy to nod to issues of “implementation,” but that needs to be more than rhetorical. Keep in mind that good use of education technology requires thinking about how it will affect learners, educators, and parents. After all, giving learners digital devices will make no difference if they get broken, are stolen, or go unused. Classroom technologies only matter if educators feel comfortable putting them to work. Since good technology is generally about complementing or amplifying what educators and learners already do, it is almost always a mistake to mandate programs from on high. It is vital that technology be adopted with the input of educators and families and with attention to how it will be used. If technology goes unused or if educators use it ineffectually, the results will disappoint—no matter the virtuosity of the technology. Indeed, unused education technology can be an unnecessary expenditure for cash-strapped education systems. This is why surveying the context, listening to voices in the field, examining how technology is used, and planning for course corrections are essential.

5. It is essential to communicate with a range of stakeholders, including educators, school leaders, parents, and learners.

Technology can feel alien in schools, confuse parents and (especially) older educators, or become an alluring distraction. Good communication can help address all of these risks. Taking care to listen to educators and families can help ensure that programs are informed by their needs and concerns. At the same time, deliberately and consistently explaining what technology is and is not supposed to do, and how it can be most effectively used, can make it more likely that programs work as intended. For instance, if teachers fear that technology is intended to reduce the need for educators, they will tend to be hostile; if they believe that it is intended to assist them in their work, they will be more receptive. Absent effective communication, it is easy for programs to “fail” not because of the technology but because of how it was used. In short, past experience in rolling out education programs indicates that it is as important to have a strong intervention design as it is to have a solid plan to socialize it among stakeholders.

  • Full Playbook: Realizing the promise: How can education technology improve learning for all?
  • References
  • Appendix A: Instruments to assess availability and use of technology
  • Appendix B: List of reviewed studies
  • Appendix C: How may technology affect interactions among students, teachers, and content?

About the Authors

Alejandro J. Ganimian, Emiliana Vegas, and Frederick M. Hess.


New global data reveal education technology’s impact on learning

The promise of technology in the classroom is great: enabling personalized, mastery-based learning; saving teacher time; and equipping students with the digital skills they will need for 21st-century careers. Indeed, controlled pilot studies have shown meaningful improvements in student outcomes through personalized blended learning (John F. Pane et al., “How does personalized learning affect student achievement?,” RAND Corporation, 2017, rand.org). During this time of school shutdowns and remote learning, education technology has become a lifeline for the continuation of learning.

As school systems begin to prepare for a return to the classroom, many are asking whether education technology should play a greater role in student learning beyond the immediate crisis and what that might look like. To help inform the answer to that question, this article analyzes one important data set: the 2018 Programme for International Student Assessment (PISA), published in December 2019 by the Organisation for Economic Co-operation and Development (OECD).

Every three years, the OECD uses PISA to test 15-year-olds around the world on math, reading, and science. What makes these tests so powerful is that they go beyond the numbers, asking students, principals, teachers, and parents a series of questions about their attitudes, behaviors, and resources. An optional student survey on information and communications technology (ICT) asks specifically about technology use—in the classroom, for homework, and more broadly.

In 2018, more than 340,000 students in 51 countries took the ICT survey, providing a rich data set for analyzing key questions about technology use in schools. How much is technology being used in schools? Which technologies are having a positive impact on student outcomes? What is the optimal amount of time to spend using devices in the classroom and for homework? How does this vary across different countries and regions?

From other studies we know that how education technology is used, and how it is embedded in the learning experience, is critical to its effectiveness. These data focus on the extent and intensity of use, not on the pedagogical context of each classroom. They therefore cannot answer questions about the eventual potential of education technology, but they can powerfully tell us the extent to which that potential is being realized today in classrooms around the world.

Five key findings from the latest results help answer these questions and suggest potential links between technology and student outcomes:

  • The type of device matters—some are associated with worse student outcomes.
  • Geography matters—technology is associated with higher student outcomes in the United States than in other regions.
  • Who is using the technology matters—technology in the hands of teachers is associated with higher scores than technology in the hands of students.
  • Intensity matters—students who use technology intensely or not at all perform better than those with moderate use.
  • A school system’s current performance level matters—in lower-performing school systems, technology is associated with worse results.

This analysis covers only one source of data, and it should be interpreted with care alongside other relevant studies. Nonetheless, the 2018 PISA results suggest that systems aiming to improve student outcomes should take a more nuanced and cautious approach to deploying technology once students return to the classroom. It is not enough to add devices to the classroom, check the box, and hope for the best.

What can we learn from the latest PISA results?

How will the use, and effectiveness, of technology change post-COVID-19?

The PISA assessment was carried out in 2018 and published in December 2019. Since its publication, schools and students globally have been quite suddenly thrust into far greater reliance on technology. Use of online-learning websites and adaptive software has expanded dramatically. Khan Academy has experienced a 250 percent surge in traffic; smaller sites have seen traffic grow fivefold or more. Hundreds of thousands of teachers have been thrown into the deep end, learning to use new platforms, software, and systems. No one is arguing that the rapid cobbling together of remote learning under extreme time pressure represents best-practice use of education technology. Nonetheless, a vast experiment is underway, and innovations often emerge in times of crisis.

At this point, it is unclear whether this represents the beginning of a new wave of more widespread and more effective technology use in the classroom or a temporary blip that will fade once students and teachers return to in-person instruction. It is possible that a combination of software improvements, teacher capability building, and student familiarity will fundamentally change the effectiveness of education technology in improving student outcomes. It is also possible that our findings will continue to hold true and technology in the classroom will continue to be a mixed blessing. It is therefore critical that ongoing research efforts track what is working and for whom and, just as important, what is not. These answers will inform the project of reimagining a better education for all students in the aftermath of COVID-19.

PISA data have their limitations. First, these data relate to high-school students, and findings may not be applicable in elementary schools or postsecondary institutions. Second, these are single-point observational data, not longitudinal experimental data, which means that any links between technology and results should be interpreted as correlation rather than causation. Third, the outcomes measured are math, science, and reading test results, so our analysis cannot assess important soft skills and nonacademic outcomes.

It is also worth noting that technology for learning has implications beyond direct student outcomes, both positive and negative. PISA cannot address these broader issues, and neither does this paper.

But PISA results, which we’ve broken down into five key findings, can still provide powerful insights. The assessment strives to measure the understanding and application of ideas, rather than the retention of facts derived from rote memorization, and the broad geographic coverage and sample size help elucidate the reality of what is happening on the ground.

Finding 1: The type of device matters

The evidence suggests that some devices have more impact than others on outcomes (Exhibit 1). Controlling for student socioeconomic status, school type, and location, the use of data projectors (that is, any device that projects computer output, slides, or other information onto a screen in the classroom) and internet-connected computers in the classroom is correlated with nearly a grade-level-better performance on the PISA assessment (assuming approximately 40 PISA points to every grade level). Specifically, we control for PISA’s composite indicator of economic, social, and cultural status (ESCS), derived from questions about general wealth, home possessions, parental education, and parental occupation; for school type (public or private); and for school location, ranging from a village, hamlet, or rural area (fewer than 3,000 people) to a large city (more than 1,000,000 people). Device use is based on the survey question “Are any of these devices available for you to use at school?,” with the choices “Yes, and I use it,” “Yes, but I don’t use it,” and “No”; for each device (data projector, internet-connected school computers, desktop computer, interactive whiteboard, portable laptop or notebook, and tablet computer), we compared students who have access to and use it with those who do not have access.
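The conversions between PISA points and years of learning used throughout this article all rest on the same rule of thumb of roughly 40 points per grade level. A minimal sketch of that arithmetic (the function name and constant are ours, introduced only for illustration):

```python
# Convert a PISA score difference into approximate grade-level equivalents,
# using the article's rule of thumb of roughly 40 PISA points per grade level.
POINTS_PER_GRADE_LEVEL = 40.0

def pisa_points_to_grade_levels(score_difference: float) -> float:
    """Return the approximate grade-level equivalent of a PISA point gap."""
    return score_difference / POINTS_PER_GRADE_LEVEL

# Examples using figures quoted in this article:
print(pisa_points_to_grade_levels(17))  # US laptop lift in reading -> 0.425
print(pisa_points_to_grade_levels(21))  # non-EU Europe whiteboard lift -> 0.525
```

This is why a 17-point gap is described as "nearly a half-grade level" and a 40-point gap as a full grade level.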

On the other hand, students who use laptops and tablets in the classroom have worse results than those who do not. For laptops, the impact of technology varies by subject; students who use laptops score five points lower on the PISA math assessment, but the impact on science and reading scores is not statistically significant. For tablets, the picture is clearer—in every subject, students who use tablets in the classroom perform a half-grade level worse than those who do not.

Some technologies are more neutral. At the global level, there is no statistically significant difference between students who use desktop computers and interactive whiteboards in the classroom and those who do not.

Finding 2: Geography matters

Looking more closely at the reading results, which were the focus of the 2018 assessment (PISA rotates its focus among reading, science, and math; in 2018, each student was tested for two hours, of which one hour was reading focused), we can see that the relationship between technology and outcomes varies widely by country and region (Exhibit 2). For example, in all regions except the United States (the only North American country to take the ICT Familiarity Questionnaire, so we compare it as a country with the other regions), students who use laptops in the classroom score between five and 12 PISA points lower than students who do not use laptops. In the United States, students who use laptops score 17 PISA points higher than those who do not. It seems that US students and teachers are doing something different with their laptops than those in other regions. Perhaps this difference is related to learning curves that develop as teachers and students learn how to get the most out of devices. A proxy to assess this learning curve could be penetration: 71 percent of US students claim to be using laptops in the classroom (excluding null responses), higher than any other region and well above the global average of 37 percent (Asia, 40 percent; non-EU Europe, 41 percent; EU, 35 percent; Latin America, 31 percent; MENA, 21 percent). We observe a similar pattern with interactive whiteboards in non-EU Europe. In every other region, interactive whiteboards seem to be hurting results, but in non-EU Europe they are associated with a lift of 21 PISA points, a total that represents a half-year of learning. In this case, however, penetration is not significantly higher than in other developed regions.

Finding 3: It matters whether technology is in the hands of teachers or students

The survey asks students whether the teacher, student, or both were using technology. Globally, the best results in reading occur when only the teacher is using the device, with some benefit in science when both teacher and students use digital devices (Exhibit 3). Exclusive use of the device by students is associated with significantly lower outcomes everywhere. The pattern is similar for science and math.

Again, the regional differences are instructive. Looking again at reading, we note that US students are getting significant lift (three-quarters of a year of learning) from either just teachers or teachers and students using devices, while students alone using a device score significantly lower (half a year of learning) than students who do not use devices at all. Exclusive use of devices by the teacher is associated with better outcomes in Europe too, though the size of the effect is smaller.

Finding 4: Intensity of use matters

PISA also asked students about intensity of use—how much time they spend on devices, both in the classroom and for homework. The results are stark: students who either shun technology altogether or use it intensely are doing better, with those in the middle flailing (Exhibit 4).

The regional data show a dramatic picture. In the classroom, the optimal amount of time to spend on devices is either “none at all” or “greater than 60 minutes” per subject per week in every region and every subject (this is the amount of time associated with the highest student outcomes, controlling for student socioeconomic status, school type, and location). In no region is a moderate amount of time (1–30 minutes or 31–60 minutes) associated with higher student outcomes. There are important differences across subjects and regions. In math, the optimal amount of time is “none at all” in every region. In reading and science, however, the optimal amount of time is greater than 60 minutes for some regions: Asia and the United States for reading, and the United States and non-EU Europe for science.
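The comparison behind this finding can be pictured as grouping students by their weekly device-time bucket and comparing average scores per bucket. A stripped-down sketch (scores below are invented for illustration; the article's actual analysis also controls for socioeconomic status, school type, and location, which this sketch omits):

```python
# Simplified sketch of the bucket comparison behind Finding 4: group students
# by weekly classroom device time and compare mean PISA scores per bucket.
from collections import defaultdict
from statistics import mean

def mean_score_by_bucket(students):
    """students: iterable of (time_bucket, pisa_score) -> {bucket: mean score}."""
    grouped = defaultdict(list)
    for bucket, score in students:
        grouped[bucket].append(score)
    return {bucket: mean(scores) for bucket, scores in grouped.items()}

# Hypothetical responses mirroring the U-shaped pattern described in the text:
sample = [
    ("none at all", 505), ("none at all", 511),
    ("1-30 minutes", 480), ("31-60 minutes", 478),
    ("more than 60 minutes", 498), ("more than 60 minutes", 502),
]
averages = mean_score_by_bucket(sample)
best_bucket = max(averages, key=averages.get)  # an extreme bucket, not a middle one
```

In the real data, the highest-scoring bucket in every region is one of the two extremes ("none at all" or "greater than 60 minutes"), never a middle bucket.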

The pattern for using devices for homework is slightly less clear cut. Students in Asia, the Middle East and North Africa (MENA), and non-EU Europe score highest when they spend “no time at all” on devices for their homework, while students spending a moderate amount of time (1–60 minutes) score best in Latin America and the European Union. Finally, students in the United States who spend greater than 60 minutes are getting the best outcomes.

One interpretation of these data is that students need to get a certain familiarity with technology before they can really start using it to learn. Think of typing an essay, for example. When students who mostly write by hand set out to type an essay, their attention will be focused on the typing rather than the essay content. A competent touch typist, however, will get significant productivity gains by typing rather than handwriting.

Finding 5: The school system’s overall performance level matters

Diving deeper into the reading outcomes, which were the focus of the 2018 assessment, we can see the magnitude of the impact of device use in the classroom. In Asia, Latin America, and Europe, students who spend any time on devices in their literacy and language arts classrooms perform about a half-grade level below those who spend none at all. In MENA, they perform more than a full grade level lower. In the United States, by contrast, more than an hour of device use in the classroom is associated with a lift of 17 PISA points, almost a half-year of learning improvement (Exhibit 5).
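The conversion between PISA points and learning time used in this passage can be made explicit. The ~34 points-per-year constant below is back-derived from the text's own statement that 17 points is almost half a year of learning; it is an illustrative assumption, not an official OECD figure:

```python
# Back-derived from the text: 17 PISA points ≈ 0.5 year of learning,
# implying roughly 34 points per school year (illustrative only).
POINTS_PER_YEAR = 34.0

def points_to_years(pisa_points: float) -> float:
    """Convert a PISA score difference into approximate years of learning."""
    return pisa_points / POINTS_PER_YEAR

print(points_to_years(17))  # → 0.5
```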

At the country level, we see that those who are on what we would call the “poor-to-fair” stage of the school-system journey 10 have the worst relationships between technology use and outcomes. (Michael Barber, Chinezi Chijoke, and Mona Mourshed, “How the world’s most improved school systems keep getting better,” November 2010.) For every poor-to-fair system taking the survey, the amount of time on devices in the classroom associated with the highest student scores is zero minutes. Good and great systems are much more mixed. Students in some very highly performing systems (for example, Estonia and Chinese Taipei) perform highest with no device use, but students in other systems (for example, Japan, the United States, and Australia) are getting the best scores with over an hour of use per week in their literacy and language arts classrooms (Exhibit 6). These data suggest that multiple approaches are effective for good-to-great systems, but poor-to-fair systems—which are not well equipped to use devices in the classroom—may need to rethink whether technology is the best use of their resources.

What are the implications for students, teachers, and systems?

Looking across all these results, we can say that the relationship between technology and outcomes in classrooms today is mixed, with variation by device, how that device is used, and geography. Our data do not permit us to draw strong causal conclusions, but this section offers a few hypotheses, informed by existing literature and our own work with school systems, that could explain these results.

First, technology must be used correctly to be effective. Our experience in the field has taught us that it is not enough to “add technology” as if it were the missing, magic ingredient. The use of tech must start with learning goals, and software selection must be based on and integrated with the curriculum. Teachers need support to adapt lesson plans to optimize the use of technology, and teachers should be using the technology themselves or in partnership with students, rather than leaving students alone with devices. These lessons hold true regardless of geography. Another ICT survey question asked principals about schools’ capacity to use digital devices. Globally, students performed better in schools where there were sufficient numbers of devices connected to fast internet service; where they had adequate software and online support platforms; and where teachers had the skills, professional development, and time to integrate digital devices in instruction. This was true even accounting for student socioeconomic status, school type, and location.

Second, technology must be matched to the instructional environment and context. One of the most striking findings in the latest PISA assessment is the extent to which technology has had a different impact on student outcomes in different geographies. This corroborates the findings of our 2010 report, How the world’s most improved school systems keep getting better . Those findings demonstrated that different sets of interventions were needed at different stages of the school-system reform journey, from poor-to-fair to good-to-great to excellent. In poor-to-fair systems, limited resources and teacher capabilities as well as poor infrastructure and internet bandwidth are likely to limit the benefits of student-based technology. Our previous work suggests that more prescriptive, teacher-based approaches and technologies (notably data projectors) are more likely to be effective in this context. For example, social enterprise Bridge International Academies equips teachers across several African countries with scripted lesson plans using e-readers. In general, these systems would likely be better off investing in teacher coaching than in a laptop per child. For administrators in good-to-great systems, the decision is harder, as technology has quite different impacts across different high-performing systems.

Third, technology involves a learning curve at both the system and student levels. It is no accident that the systems in which the use of education technology is more mature are getting more positive impact from tech in the classroom. The United States stands out as the country with the most mature set of education-technology products, and its scale enables companies to create software that is integrated with curricula. 11 (Common Core State Standards sought to establish consistent educational standards across the United States; while these have not been adopted in all states, they cover enough states to provide continuity and consistency for software and curriculum developers.) A similar effect also appears to operate at the student level; those who dabble in tech may be spending their time learning the tech rather than using the tech to learn. This learning curve needs to be built into technology-reform programs.

Taken together, these results suggest that systems that take a comprehensive, data-informed approach may achieve learning gains from thoughtful use of technology in the classroom. The best results come when significant effort is put into ensuring that devices and infrastructure are fit for purpose (fast enough internet service, for example), that software is effective and integrated with curricula, that teachers are trained and given time to rethink lesson plans integrating technology, that students have enough interaction with tech to use it effectively, and that technology strategy is cognizant of the system’s position on the school-system reform journey. Online learning and education technology are currently providing an invaluable service by enabling continued learning over the course of the pandemic; this does not mean that they should be accepted uncritically as students return to the classroom.

Jake Bryant is an associate partner in McKinsey’s Washington, DC, office; Felipe Child is a partner in the Bogotá office; Emma Dorn is the global Education Practice manager in the Silicon Valley office; and Stephen Hall is an associate partner in the Dubai office.

The authors wish to thank Fernanda Alcala, Sujatha Duraikkannan, and Samuel Huang for their contributions to this article.

Global Education Monitoring Report

  • 2023 GEM REPORT

Technology in education

A tool on whose terms?

Major advances in technology, especially digital technology, are rapidly transforming the world. Information and communication technology (ICT) has been applied in education for 100 years, ever since the popularization of radio in the 1920s. But it is the use of digital technology over the past 40 years that has the most significant potential to transform education. An education technology industry has emerged and focused, in turn, on the development and distribution of education content, learning management systems, language applications, augmented and virtual reality, personalized tutoring, and testing. Most recently, breakthroughs in artificial intelligence (AI) methods have increased the power of education technology tools, leading to speculation that technology could even supplant human interaction in education.

In the past 20 years, learners, educators and institutions have widely adopted digital technology tools. The number of students in MOOCs increased from 0 in 2012 to at least 220 million in 2021. The language learning application Duolingo had 20 million daily active users in 2023, and Wikipedia had 244 million page views per day in 2021. The 2018 PISA found that 65% of 15-year-old students in OECD countries were in schools whose principals agreed that teachers had the technical and pedagogical skills to integrate digital devices in instruction and 54% in schools where an effective online learning support platform was available; these shares are believed to have increased during the COVID-19 pandemic. Globally, the percentage of internet users rose from 16% in 2005 to 66% in 2022. About 50% of the world’s lower secondary schools were connected to the internet for pedagogical purposes in 2022.

The adoption of digital technology has resulted in many changes in education and learning. The set of basic skills that young people are expected to learn in school, at least in richer countries, has expanded to include a broad range of new ones to navigate the digital world. In many classrooms, paper has been replaced by screens and pens by keyboards. COVID-19 can be seen as a natural experiment where learning switched online for entire education systems virtually overnight. Higher education is the subsector with the highest rate of digital technology adoption, with online management platforms replacing campuses. The use of data analytics has grown in education management. Technology has made a wide range of informal learning opportunities accessible.

Yet the extent to which technology has transformed education needs to be debated. Change resulting from the use of digital technology is incremental, uneven and bigger in some contexts than others. The application of digital technology varies by community and socioeconomic level, by teacher willingness and preparedness, by education level, and by country income. Except in the most technologically advanced countries, computers and devices are not used in classrooms on a large scale. Technology use is not universal and will not become so any time soon. Moreover, evidence is mixed on its impact: Some types of technology seem to be effective in improving some kinds of learning. The short- and long-term costs of using digital technology appear to be significantly underestimated. The most disadvantaged are typically denied the opportunity to benefit from this technology.

Too much attention on technology in education usually comes at a high cost. Resources spent on technology, rather than on classrooms, teachers and textbooks for all children in low- and lower-middle-income countries lacking access to these resources are likely to lead to the world being further away from achieving the global education goal, SDG 4. Some of the world’s richest countries ensured universal secondary schooling and minimum learning competencies before the advent of digital technology. Children can learn without it.

However, their education is unlikely to be as relevant without digital technology. The Universal Declaration of Human Rights defines the purpose of education as promoting the ‘full development of the human personality’, strengthening ‘respect for … fundamental freedoms’ and promoting ‘understanding, tolerance and friendship’. This notion needs to move with the times. An expanded definition of the right to education could include effective support by technology for all learners to fulfil their potential, regardless of context or circumstance.

Clear objectives and principles are needed to ensure that technology use is of benefit and avoids harm. The negative and harmful aspects in the use of digital technology in education and society include risk of distraction and lack of human contact. Unregulated technology even poses threats to democracy and human rights, for instance through invasion of privacy and stoking of hatred. Education systems need to be better prepared to teach about and through digital technology, a tool that must serve the best interests of all learners, teachers and administrators. Impartial evidence showing that technology is being used in some places to improve education, and good examples of such use, need to be shared more widely so that the optimal mode of delivery can be assured for each context.

CAN TECHNOLOGY HELP SOLVE THE MOST IMPORTANT CHALLENGES IN EDUCATION?

Discussions about education technology are focused on technology rather than education. The first question should be: What are the most important challenges in education? As a basis for discussion, consider the following three challenges:

  • Equity and inclusion: Is fulfilment of the right to choose the education one wants and to realize one’s full potential through education compatible with the goal of equality? If not, how can education become the great equalizer?
  • Quality: Do education’s content and delivery support societies in achieving sustainable development objectives? If not, how can education help learners to not only acquire knowledge but also be agents of change?
  • Efficiency: Does the current institutional arrangement of teaching learners in classrooms support the achievement of equity and quality? If not, how can education balance individualized instruction and socialization needs?

How best can digital technology be included in a strategy to tackle these challenges, and under what conditions? Digital technology packages and transmits information on an unprecedented scale at high speed and low cost. Information storage has revolutionized the volume of accessible knowledge. Information processing enables learners to receive immediate feedback and, through interaction with machines, adapt their learning pace and trajectory: Learners can organize the sequence of what they learn to suit their background and characteristics. Information sharing lowers the cost of interaction and communication. But while such technology has tremendous potential, many tools have not been designed for application to education. Not enough attention has been given to how they are applied in education and even less to how they should be applied in different education contexts.

On the question of equity and inclusion , ICT – and digital technology in particular – helps lower the cost of access to education for some disadvantaged groups: those who live in remote areas or are displaced, face learning difficulties, lack time, or have missed out on past education opportunities. But while access to digital technology has expanded rapidly, there are deep divides in access. Disadvantaged groups own fewer devices, are less connected to the internet (Figure 1) and have fewer resources at home. The cost of much technology is falling rapidly but is still too high for some. Households that are better off can buy technology earlier, giving them more advantages and compounding disparity. Inequality in access to technology exacerbates existing inequality in access to education, a weakness exposed during the COVID-19 school closures.

Figure 1: Internet connectivity is highly unequal

Percentage of 3- to 17-year-olds with internet connection at home, by wealth quintile, selected countries, 2017–19 Source: UNICEF database.

Education quality is a multifaceted concept. It encompasses adequate inputs (e.g. availability of technology infrastructure), prepared teachers (e.g. teacher standards for technology use in classrooms), relevant content (e.g. integration of digital literacy in the curriculum) and individual learning outcomes (e.g. minimum levels of proficiency in reading and mathematics). But education quality should also encompass social outcomes. It is not enough for students to be vessels receiving knowledge; they need to be able to use it to help achieve sustainable development in social, economic and environmental terms.

There are a variety of views on the extent to which digital technologies can enhance education quality. Some argue that, in principle, digital technology creates engaging learning environments, enlivens student experiences, simulates situations, facilitates collaboration and expands connections. But others say digital technology tends to support an individualized approach to education, reducing learners’ opportunities to socialize and learn by observing each other in real-life settings. Moreover, just as new technology overcomes some constraints, it brings its own problems. Increased screen time has been associated with adverse impact on physical and mental health. Insufficient regulation has led to unauthorized use of personal data for commercial purposes. Digital technology has also helped spread misinformation and hate speech, including through education.

Improvements to efficiency may be the most promising way for digital technology to make a difference in education. Technology is touted as being able to reduce the time students and teachers spend on menial tasks, time that can be used in other, educationally more meaningful activities. However, there are conflicting views on what is meaningful. The way that education technology is used is more complex than just a substitution of resources. Technology may be one-to-many, one-to-one or peer-to-peer technology. It may require students to learn alone or with others, online or offline, independently or networked. It delivers content, creates learner communities and connects teachers with students. It provides access to information. It may be used for formal or informal learning and can assess what has been learned. It is used as a tool for productivity, creativity, communication, collaboration, design and data management. It may be professionally produced or have user-generated content. It may be specific to schools and place-based or transcend time and place. As in any complex system, each technology tool involves distinct infrastructure, design, content and pedagogy, and each may promote different types of learning.

Technology is evolving too fast to permit evaluation that could inform decisions on legislation, policy and regulation. Research on technology in education is as complex as technology itself. Studies evaluate experiences of learners of various ages using various methodologies applied in contexts as different as self-study, classrooms and schools of diverse sizes and features, non-school settings, and at system level. Findings that apply in some contexts are not always replicable elsewhere. Some conclusions can be drawn from long-term studies as technologies mature but there is an endless stream of new products. Meanwhile, not all impact can be easily measured, given technology’s ubiquity, complexity, utility and heterogeneity. In brief, while there is much general research on education technology, the amount of research for specific applications and contexts is insufficient, making it difficult to prove that a particular technology enhances a particular kind of learning.

Why is there often the perception nevertheless that technology can address major education challenges? To understand the discourse around education technology, it is necessary to look behind the language being used to promote it, and the interests it serves. Who frames the problems technology should address? What are the consequences of such framing for education? Who promotes education technology as a precondition for education transformation? How credible are such claims? What criteria and standards need to be set to evaluate digital technology’s current and potential future contribution to education so as to separate hype from substance? Can evaluation go beyond short-term assessments of impact on learning and capture potential far-reaching consequences of the generalized use of digital technology in education?

Exaggerated claims about technology go hand in hand with exaggerated estimates of its global market size. In 2022, business intelligence providers’ estimates ranged from USD 123 billion to USD 300 billion. These accounts are almost always projected forward, predicting optimistic expansion, yet they fail to give historic trends and verify whether past projections proved true. Such reporting routinely characterizes education technology as essential and technology companies as enablers and disruptors. If optimistic projections are not fulfilled, responsibility is implicitly placed on governments as a way of maintaining indirect pressure on them to increase procurement. Education is criticized as being slow to change, stuck in the past and a laggard when it comes to innovation. Such coverage plays on users’ fascination with novelty but also their fear of being left behind.

The sections below further explore the three challenges this report addresses: equity and inclusion (in terms of access to education for disadvantaged groups and access to content), quality (in terms of teaching through and about digital technology) and efficiency (in terms of education management). After identifying technology’s potential to tackle these challenges, it discusses three conditions that need to be met for that potential to be fulfilled: equitable access, appropriate governance and regulation, and sufficient teacher capacity.

EQUITY AND INCLUSION: ACCESS FOR DISADVANTAGED GROUPS

A wide range of technology brings education to hard-to-reach learners. Technology has historically opened up education to learners facing significant obstacles in access to schools or well-trained teachers. Interactive radio instruction is used in nearly 40 countries. In Nigeria, radio instruction combined with print and audiovisual materials has been used since the 1990s, reaching nearly 80% of nomads and increasing their literacy, numeracy and life skills. Television has helped educate marginalized groups, notably in Latin America and the Caribbean. The Telesecundaria programme in Mexico, combining televised lessons with in-class support and extensive teacher training, increased secondary school enrolment by 21%. Mobile learning devices, often the only type of device accessible to disadvantaged learners, have been used in hard-to-reach areas and emergencies to share educational materials; complement in-person or remote channels; and foster interactions between students, teachers and parents, notably during COVID-19. Adults have been the main target of online distance learning, with open universities having increased participation for both working and disadvantaged adults.

Inclusive technology supports accessibility and personalization for learners with disabilities. Assistive technology removes learning and communication barriers, with numerous studies reporting a significant positive impact on academic engagement, social participation and the well-being of learners with disabilities. However, such devices remain inaccessible and unaffordable in many countries, and teachers often lack specialized training to use them effectively in learning environments. While people with disabilities used to rely exclusively on specialized devices to gain access to education, technology platforms and devices are increasingly incorporating accessibility features, which support inclusive, personalized learning for all students.

Technology supports learning continuity in emergencies. Mapping of 101 distance education projects in crisis contexts in 2020 showed that 70% used radio, television and basic mobile phones. During the Boko Haram crisis in Nigeria, the Technology Enhanced Learning for All programme used mobile phones and radios to support the learning continuity of 22,000 disadvantaged children, with recorded improvement in literacy and numeracy skills. However, there are significant gaps in terms of rigorous evaluation of education technology in emergencies, despite some limited recorded impact. Meanwhile, most projects are led by non-state actors as short-term crisis responses, raising sustainability concerns; education ministries implemented only 12% of the 101 projects.

Technology supported learning during COVID-19, but millions were left out. During school closures, 95% of education ministries carried out some form of distance learning, potentially reaching over 1 billion students globally. Many of the resources used during the pandemic were first developed in response to previous emergencies or rural education, with some countries building on decades of experience with remote learning. Sierra Leone revived the Radio Teaching Programme, developed during the Ebola crisis, one week after schools closed. Mexico expanded content from its Telesecundaria programme to all levels of education. However, at least half a billion, or 31% of students worldwide – mostly the poorest (72%) and those in rural areas (70%) – could not be reached by remote learning. Although 91% of countries used online learning platforms to deliver distance learning during school closures, the platforms only reached a quarter of students globally. For the rest, low-tech interventions such as radio and television were largely used, in combination with paper-based materials and mobile phones for increased interactivity.

Some countries are expanding existing platforms to reach marginalized groups. Less than half of all countries developed long-term strategies for increasing their resilience and the sustainability of interventions as part of their COVID-19 response plans. Many have abandoned distance learning platforms developed during COVID-19, while others are repurposing them to reach marginalized learners. The digital platform set up in Ukraine during the pandemic was expanded once the war broke out in 2022, allowing 85% of schools to complete the academic year.

EQUITY AND INCLUSION: ACCESS TO CONTENT

Technology facilitates content creation and adaptation. Open educational resources (OERs) encourage the reuse and repurposing of materials to cut development time, avoid duplication of work and make materials more context-specific or relevant to learners. They also significantly reduce the cost of access to content. In the US state of North Dakota, an initial investment of USD 110,000 to shift to OERs led to savings of over USD 1 million in student costs. Social media increases access to user-generated content. YouTube, a major player in both formal and informal learning, is used by about 80% of the world’s top 113 universities. Moreover, collaborative digital tools can improve the diversity and quality of content creation. In South Africa, the Siyavula initiative supported tutor collaboration on the creation of primary and secondary education textbooks.

Digitization of educational content simplifies access and distribution. Many countries, including Bhutan and Rwanda, have created static digital versions of traditional textbooks to increase availability. Others, including India and Sweden, have produced digital textbooks that encourage interactivity and multimodal learning. Digital libraries and educational content repositories such as the National Academic Digital Library of Ethiopia, National Digital Library of India and Teachers Portal in Bangladesh help teachers and learners find relevant materials. Learning management platforms, which have become a key part of the contemporary learning environment, help organize content by integrating digital resources into course structures.

Open access resources help overcome barriers. Open universities and MOOCs can eliminate time, location and cost barriers to access. In Indonesia, where low participation in tertiary education is largely attributed to geographical challenges, MOOCs play an important role in expanding access to post-secondary learning. During COVID-19, MOOC enrolment surged, with the top three providers adding as many users in April 2020 as in all of 2019. Technology can also remove language barriers. Translation tools help connect teachers and learners from various countries and increase the accessibility of courses by non-native students.

Ensuring and assessing the quality of digital content is difficult. The sheer quantity of content and its decentralized production pose logistical challenges for evaluation. Several strategies have been implemented to address this. China established specific quality criteria for MOOCs to be nationally recognized. The European Union developed its OpenupED quality label. India strengthened the link between non-formal and formal education. Micro-credentials are increasingly used to ensure that institution and learner both meet minimum standards. Some platforms aim to improve quality by recentralizing content production. YouTube, for example, has been funnelling financing and resources to a few trusted providers and partnering with well-established education institutions.

Technology may reinforce existing inequality in both access to and production of content. Privileged groups still produce most content. A study of higher-education repositories with OER collections found that nearly 90% were created in Europe or North America; 92% of the material in the OER Commons global library is in English. This influences who has access to digital content. MOOCs, for example, mainly benefit educated learners – studies have shown around 80% of participants on major platforms already have a tertiary degree – and those from richer countries. The disparity is due to divides in digital skills, internet access, language and course design. Regional MOOCs cater to local needs and languages but can also worsen inequality.

TEACHING AND LEARNING

Technology has been used to support teaching and learning in multiple ways. Digital technology offers two broad types of opportunities. First, it can improve instruction by addressing quality gaps, increasing opportunities to practise, increasing available time and personalizing instruction. Second, it can engage learners by varying how content is represented, stimulating interaction and prompting collaboration. Systematic reviews over the past two decades on technology’s impact on learning find small to medium-sized positive effects compared to traditional instruction. However, evaluations do not always isolate technology’s impact in an intervention, making it difficult to attribute positive effects to technology alone rather than to other factors, such as added instruction time, resources or teacher support. Technology companies can have disproportionate influence on evidence production. For example, Pearson funded studies contesting independent analysis that showed its products had no impact.

The prevalence of ICT use in classrooms is not high, even in the world’s richest countries. The 2018 PISA found that only about 10% of 15-year-old students in over 50 participating education systems used digital devices for more than an hour a week in mathematics and science lessons, on average (Figure 2). The 2018 International Computer and Information Literacy Study (ICILS) showed that in the 12 participating education systems, simulation and modelling software in classrooms was available to just over one third of students, with country levels ranging from 8% in Italy to 91% in Finland.

Figure 2: Even in upper-middle- and high-income countries, technology use in mathematics and science classrooms is limited

Percentage of 15-year-old students who used digital devices for at least one hour per week in mathematics or science classroom lessons, selected upper-middle- and high-income countries, 2018.

Source: 2018 PISA database.

Recorded lessons can address teacher quality gaps and improve teacher time allocation. In China, lesson recordings from high-quality urban teachers were delivered to 100 million rural students. An impact evaluation showed a 32% improvement in Chinese language skills and a 38% long-term reduction in the rural–urban earnings gap. However, just delivering materials without contextualization and support is insufficient. In Peru, the One Laptop Per Child programme distributed over 1 million laptops loaded with content, but no positive impact on learning resulted, partly because the focus was on providing devices rather than on the quality of pedagogical integration.

Enhancing technology-aided instruction with personalization can improve some types of learning. Personalized adaptive software generates analytics that can help teachers track student progress, identify error patterns, provide differentiated feedback and reduce workload on routine tasks. Evaluations of the use of a personalized adaptive software in India documented learning gains in after-school settings and for low-performing students. However, not all widely used software interventions have strong evidence of positive effects compared to teacher-led instruction. A meta-analysis of studies on an AI learning and assessment system that has been used by over 25 million students in the United States found it was no better than traditional classroom teaching in improving outcomes.
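
Personalized adaptive software of the kind evaluated in India is typically built around a simple loop: estimate the learner's current level from recent responses, then serve the item closest to that level. The sketch below is a minimal illustration of that loop; the item bank, step size and update rule are invented for this example, not taken from any specific product.

```python
# Minimal adaptive-practice loop: serve the item whose difficulty is
# closest to the learner's estimated ability, then update the estimate.
# All values and the update rule are illustrative assumptions.

def update_ability(ability, correct, step=0.3):
    """Nudge the ability estimate up after a correct answer, down otherwise."""
    return ability + step if correct else ability - step

def pick_item(ability, item_bank):
    """Choose the unanswered item whose difficulty best matches ability."""
    return min(item_bank, key=lambda item: abs(item["difficulty"] - ability))

item_bank = [{"id": i, "difficulty": d} for i, d in enumerate([-2, -1, 0, 1, 2])]
ability = 0.0
for correct in [True, True, False]:      # simulated learner responses
    item = pick_item(ability, item_bank)
    item_bank.remove(item)               # do not repeat items
    ability = update_ability(ability, correct)
```

Real products replace the crude step update with a statistical model fitted to millions of responses, but the select-respond-update cycle is the same.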

Varied interaction and visual representation can enhance student engagement. A meta-analysis of 43 studies published from 2008 to 2019 found that digital games improved cognitive and behavioural outcomes in mathematics. Interactive whiteboards can support teaching and learning if well integrated in pedagogy; but in the United Kingdom, despite large-scale adoption, they were mostly used to replace blackboards. Augmented, mixed or virtual reality used as an experiential learning tool for repeated practice in life-like conditions in technical, vocational and scientific subjects is not always as effective as real-life training but may be superior to other digital methods, such as video demonstrations.

Technology offers teachers low-cost and convenient ways to communicate with parents. The Colombian Institute of Family Welfare’s distance education initiative, which targeted 1.7 million disadvantaged children, relied on social media platforms to relay guidance to caregivers on pedagogical activities at home. However, uptake and effectiveness of behavioural interventions targeting caregivers are limited by parental education levels, as well as lack of time and material resources.

Student use of technology in classrooms and at home can be distracting, disrupting learning. A meta-analysis of research on student mobile phone use and its impact on education outcomes, covering students from pre-primary to higher education in 14 countries, found a small negative effect, and a larger one at the university level. Studies using PISA data indicate a negative association between ICT use and student performance beyond a threshold of moderate use. Teachers perceive tablet and phone use as hampering classroom management. More than one in three teachers in seven countries participating in the 2018 ICILS agreed that ICT use in classrooms distracted students. Online learning relies on student ability to self-regulate and may put low-performing and younger learners at increased risk of disengagement.

DIGITAL SKILLS

The definition of digital skills has been evolving along with digital technology. An analysis for this report shows that 54% of countries have identified digital skills standards for learners. The Digital Competence Framework for Citizens (DigComp), developed on behalf of the European Commission, has five competence areas: information and data literacy, communication and collaboration, digital content creation, safety, and problem-solving. Some countries have adopted digital skills frameworks developed by non-state, mostly commercial, actors. The International Computer Driving Licence (ICDL) has been promoted as a ‘digital skills standard’ but is associated mainly with Microsoft applications. Kenya and Thailand have endorsed the ICDL as the digital literacy standard for use in schools.

Digital skills are unequally distributed. In the 27 European Union (EU) countries, 54% of adults had at least basic digital skills in 2021. In Brazil, 31% of adults had at least basic skills, but the level was twice as high in urban as in rural areas, three times as high among those in the labour force as among those outside it, and nine times as high in the top socioeconomic group as in the two bottom groups. The overall gender gap in digital skills is small, but wider in specific skills. In 50 countries, 6.5% of males and 3.2% of females could write a computer program. In Belgium, Hungary and Switzerland, no more than 2 women for every 10 men could program; in Albania, Malaysia and Palestine, 9 women for every 10 men could do so. According to the 2018 PISA, 5% of 15-year-olds with the strongest reading skills but 24% of those with the weakest ones were at risk of being misled by a typical phishing email.

Formal skills training may not be the main way of acquiring digital skills. About one quarter of adults in EU countries, ranging from 16% in Italy to 40% in Sweden, had acquired skills through a ‘formalised educational institution’. Informal learning, such as self-study and informal assistance from colleagues, relatives and friends, was used by twice as many. Still, formal education is important: In 2018, those with tertiary education in Europe were twice as likely (18%) as those with upper secondary education (9%) to engage in free online training or self-study to improve their computer, software or application use. Solid mastery of literacy and numeracy skills is positively associated with mastery of at least some digital skills.

A curriculum content mapping of 16 education systems showed that Greece and Portugal dedicated less than 10% of the curriculum to data and media literacy while Estonia and the Republic of Korea embedded both in half their curricula. In some countries, media literacy in curricula is explicitly connected to critical thinking in subject disciplines, as under Georgia’s New School Model. Asia is characterized by a protectionist approach to media literacy that prioritizes information control over education. But in the Philippines, the Association for Media and Information Literacy successfully advocated for incorporation of media and information literacy in the curriculum, and it is now a core subject in grades 11 and 12.

Digital skills in communication and collaboration matter in hybrid learning arrangements. Argentina promoted teamwork skills as part of a platform for programming and robotics competitions in primary and secondary education. Mexico offers teachers and students digital education resources and tools for remote collaboration, peer learning and knowledge sharing. Ethical digital behaviour includes rules, conventions and standards to be learned, understood and practised by digital users when using digital spaces. Digital communication’s anonymity, invisibility, asynchronicity and minimization of authority can make it difficult for individuals to understand its complexities.

Competences in digital content creation include selecting appropriate delivery formats and creating copy, audio, video and visual assets; integrating digital content; and respecting copyright and licences. The ubiquitous use of social media has turned content creation into a skill with direct application in electronic commerce. In Indonesia, the Siberkreasi platform counts collaborative engagement among its core activities. The Kenya Copyright Board collaborates closely with universities to provide copyright education and conducts frequent training sessions for students in the visual arts and ICT.

Education systems need to strengthen preventive measures and respond to many safety challenges, from passwords to permissions, helping learners understand the implications of their online presence and digital footprint. In Brazil, 29% of schools have conducted debates or lectures on privacy and data protection. In New Zealand, the Te Mana Tūhono (Power of Connectivity) programme delivers digital protection and security services to almost 2,500 state and state-integrated schools. A systematic review of interventions in Australia, Italy, Spain and the United States estimated that the average programme had a 76% chance of reducing cyberbullying perpetration. In Wales, United Kingdom, the government has advised schools how to prepare for and respond to harmful viral online content and hoaxes.

The definition of problem-solving skills varies widely among education systems. Many countries perceive them in terms of coding and programming and as part of a computer science curriculum that includes computational thinking, algorithm use and automation. A global review estimated that 43% of students in high-income countries, 62% in upper-middle-income, 5% in lower-middle-income but no students in low-income countries take computer science as compulsory in primary and/or secondary education. Only 20% of education systems require schools to offer computer science as an elective or core course. Non-state actors often support coding and programming skills. In Chile, Code.org has partnered with the government to provide educational resources in computer science.

EDUCATION MANAGEMENT

Education management information systems focus on efficiency and effectiveness. Education reforms have been characterized by increased school autonomy, target setting and results-based performance, all of which require more data. By one measure, since the 1990s, the number of policies making reference to data, statistics and information has increased by 13 times in high-income, 9 times in upper-middle-income, and 5 times in low- and lower-middle-income countries. But only 54% of countries globally – and as low as 22% in sub-Saharan Africa – have unique student identification mechanisms.
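
Unique student identification need not expose personal data: one common design stores a keyed hash of the national ID rather than the ID itself, so records can be linked across systems without revealing identities. A minimal sketch, assuming a ministry-held secret key (the key handling shown is illustrative, not production guidance):

```python
import hashlib
import hmac

def pseudonymous_id(national_id: str, secret_key: bytes) -> str:
    """Derive a stable, non-reversible student identifier from a national ID.

    Using HMAC rather than a plain hash means re-identification requires
    the secret key, not just a list of candidate IDs to hash and compare.
    """
    return hmac.new(secret_key, national_id.encode("utf-8"),
                    hashlib.sha256).hexdigest()[:16]

key = b"ministry-held-secret"   # illustrative; keep out of source control
pid = pseudonymous_id("1990-12345", key)
```

The same input with the same key always yields the same identifier, which is what allows longitudinal tracking while limiting privacy risk.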

Geospatial data can support education management. Geographical information systems help address equity and efficiency in infrastructure and resource distribution in education systems. School mapping has been used to foster diversity and reduce inequality of opportunity. Ireland links three databases to decide in which of its 314 planning areas to build new schools. Geospatial data can identify areas where children live too far from the nearest school. For instance, it has been estimated that 5% of the population in Guatemala and 41% in the United Republic of Tanzania live more than 3 kilometres away from the nearest primary school.
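
Estimates such as the share of people living more than 3 kilometres from the nearest primary school can be computed directly from two sets of coordinates. A hedged Python sketch using the haversine great-circle formula (the coordinates below are made up for illustration; real analyses use georeferenced census and school registry data):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def share_beyond(households, schools, threshold_km=3.0):
    """Share of households farther than threshold_km from every school."""
    far = sum(
        1 for h in households
        if min(haversine_km(*h, *s) for s in schools) > threshold_km
    )
    return far / len(households)

# Illustrative coordinates only
schools = [(0.0, 0.0)]
households = [(0.0, 0.01), (0.1, 0.1)]   # roughly 1.1 km and 15.7 km away
```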

Education management information systems struggle with data integration. In 2017, Malaysia introduced the Education Data Repository as part of its 2019–23 ICT Transformation Plan to progressively integrate its 350 education data systems and applications scattered across institutions. By 2019, it had integrated 12 of its main data systems, aiming for full integration through a single data platform by the end of 2023. In New Zealand, schools had been procuring student management systems independently and lack of interoperability between them was preventing authorities from tracking student progress. In 2019, the government began setting up the National Learner Repository and Data Exchange to be hosted in cloud data centres, but deployment was paused in 2021 due to cybersecurity concerns. European countries have been addressing interoperability concerns collectively to facilitate data sharing between countries and across multiple applications used in higher-education management through the EMREX project.

Computer-based assessments and computer adaptive testing have been replacing many paper-based assessments. They reduce test administration costs, improve measurement quality and provide rapid scoring. As more examinations shift online, the need for online cheating detection and proctoring tools has also increased. While these can reduce cheating, their effectiveness should be weighed against fairness and psychological effects. Evidence on the quality and usefulness of technology-based assessments has started to emerge, but much less is known about cost efficiency. Among 34 papers on technology-based assessments reviewed for this report, transparent data on cost were lacking.
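
Part of the measurement-quality gain from computer adaptive testing comes from always administering the most informative item for the current ability estimate. Under the one-parameter (Rasch) model, an item's information peaks when its difficulty matches the test taker's ability, so item selection reduces to the sketch below (an illustrative simplification, not any operational test's algorithm):

```python
import math

def rasch_p(ability, difficulty):
    """Probability of a correct answer under the Rasch model."""
    return 1.0 / (1.0 + math.exp(difficulty - ability))

def item_information(ability, difficulty):
    """Fisher information of an item at a given ability: p * (1 - p)."""
    p = rasch_p(ability, difficulty)
    return p * (1.0 - p)

def most_informative(ability, difficulties):
    """Pick the item difficulty with maximum information at this ability."""
    return max(difficulties, key=lambda d: item_information(ability, d))
```

Because each administered item is maximally informative, adaptive tests can reach a given measurement precision with fewer items than a fixed paper form.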

Learning analytics can increase formative feedback and enable early detection systems. In China, learning analytics has been used to identify learners’ difficulties, predict learning trajectories and manage teacher resources. In the United States, Course Signals is a system used to flag the likelihood of a student not passing a course; educators can then target them for additional support. However, learning analytics requires all actors to have sufficient data literacy. Successful education systems typically have absorptive capacity, including strong school leaders and confident teachers willing to innovate. Yet often seemingly trivial issues, such as maintenance and repair, are ignored or underestimated.
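
Early warning systems of the Course Signals type combine a few observable indicators into a risk score and flag students who fall below a threshold. A deliberately simplified sketch follows; the weights and threshold here are invented for illustration, whereas real systems fit them to historical pass/fail data:

```python
def risk_score(attendance_rate, submitted_rate, current_grade,
               weights=(0.3, 0.3, 0.4)):
    """Weighted average of three indicators, each scaled to 0..1.
    Lower scores mean higher risk of not passing."""
    w_att, w_sub, w_grade = weights
    return (w_att * attendance_rate
            + w_sub * submitted_rate
            + w_grade * current_grade)

def flag_at_risk(students, threshold=0.6):
    """Return the IDs of students whose score falls below the threshold."""
    return [s["id"] for s in students
            if risk_score(s["attendance"], s["submitted"], s["grade"]) < threshold]

students = [                      # illustrative records
    {"id": "A", "attendance": 0.95, "submitted": 0.9, "grade": 0.8},
    {"id": "B", "attendance": 0.50, "submitted": 0.4, "grade": 0.55},
]
```

The flag is only as useful as the response it triggers, which is why such systems depend on educators with the data literacy to interpret and act on the score.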

ACCESS TO TECHNOLOGY: EQUITY, EFFICIENCY AND SUSTAINABILITY

Access to electricity and devices is highly unequal between and within countries. In 2021, almost 9% of the global population – and more than 70% of people in rural sub-Saharan Africa – lacked access to electricity. Globally, one in four primary schools do not have electricity. A 2018 study in Cambodia, Ethiopia, Kenya, Myanmar, Nepal and Niger found that 31% of public schools were on grid and 9% were off grid, with only 16% enjoying uninterrupted power supply. Globally, 46% of households had a computer at home in 2020; the share of schools with computers for pedagogical purposes was 47% in primary, 62% in lower secondary and 76% in upper secondary education. There were at most 10 computers per 100 students in Brazil and Morocco but 160 computers per 100 students in Luxembourg, according to the 2018 PISA.

Internet access, a vital enabler of economic, social and cultural rights, is also unequal. In 2022, two in three people globally used the internet. In late 2021, 55% of the world’s population had mobile broadband access. In low- and middle-income countries, 16% fewer women than men used mobile internet in 2021. An estimated 3.2 billion people do not use mobile internet services despite being covered by a mobile broadband network. Globally, 40% of primary, 50% of lower secondary and 65% of upper secondary schools are connected to the internet. In India, 53% of private unaided and 44% of private aided schools are connected, compared with only 14% of government schools.

Various policies are used to improve access to devices. About one in five countries have policies granting subsidies or deductions to buy devices. One-to-one technology programmes have at some point been established in 30% of countries; currently only 15% of countries pursue such programmes. A number of upper-middle- and high-income countries are shifting from providing devices to allowing students to use their own devices in school. Jamaica adopted a Bring Your Own Device policy framework in 2020 with sustainability in mind.

Some countries champion free and open source software. Education institutions with complex ICT infrastructure, such as universities, can benefit from open source software to add new solutions or functionalities. By contrast, proprietary software does not permit sharing and has vendor locks that hinder interoperability, exchange and updates. In India, the National e-Governance Plan makes it mandatory for all software applications and services used in government to be built on open source software to achieve efficiency, transparency, reliability and affordability.

Countries are committed to universal internet provision at home and in school. About 85% of countries have policies to improve school or learner connectivity and 38% have laws on universal internet provision. A review of 72 low- and middle-income countries found that 29 had used universal service funds to reduce costs for underserved groups. In Kyrgyzstan, renegotiated contracts helped cut prices by nearly half and almost doubled internet speed. In Costa Rica, the Hogares Conectados (Connected Households) programme, which provided an internet cost subsidy to the poorest 60% of households with school-age children, helped reduce the share of unconnected households from 41% in 2016 to 13% in 2019. Zero-rating, or providing free internet access for education or other purposes, has been used, especially during COVID-19, but is not without problems, as it violates the net neutrality principle.

Education technology is often underutilized. In the United States, an average of 67% of education software licences were unused and 98% were not used intensively. According to the EdTech Genome Project, 85% of some 7,000 pedagogical tools, which cost USD 13 billion, were ‘either a poor fit or implemented incorrectly’. Less than one in five of the top 100 education technology tools used in classrooms met the requirements of the US Every Student Succeeds Act. Research had been published for 39% of these tools but the research was aligned with the act in only 26% of cases.

Evidence needs to drive education technology decisions. A review in the United Kingdom found that only 7% of education technology companies had conducted randomized controlled trials, 12% had used third-party certification and 18% had engaged in academic studies. An online survey of teachers and administrators in 17 US states showed that only 11% requested peer-reviewed evidence prior to adopting education technology. Recommendations influence purchase decisions, yet ratings can be manipulated through fake reviews disseminated on social media. Few governments try to fill the evidence gap, so demand has grown for independent reviews. Edtech Tulna, a partnership between a private think tank and a public university in India, offers quality standards, an evaluation toolkit and publicly available expert reviews.

Education technology procurement decisions need to take economic, social and environmental sustainability into account. With respect to economic considerations, it is estimated that initial investment in education technology accounts for just 25% or less of the eventual total cost. Regarding social concerns, procurement processes need to address equity, accessibility, local ownership and appropriation. In France, the Territoires Numériques Educatifs (Digital Educational Territories) initiative was criticized because not all subsidized equipment met local needs, and local governments were left out of the decisions on which equipment to purchase. Both issues have since been addressed. Concerning environmental considerations, it has been estimated that extending the lifespan of all laptops in the European Union by a year would save the equivalent of taking almost 1 million cars off the road in terms of CO2 emissions.

Regulation needs to address risks in education technology procurement. Public procurement is vulnerable to collusion and corruption. In 2019, Brazil’s Comptroller General of the Union found irregularities in the electronic bidding process for the purchase of 1.3 million computers, laptops and notebooks for state and municipal public schools. Decentralizing public procurement to local governments is one way to balance some of the risks. Indonesia has used its SIPLah e-commerce platform to support school-level procurement processes. However, decentralization is vulnerable to weak organizational capacity. A survey of administrators in 54 US school districts found that they had rarely carried out needs assessments.

GOVERNANCE AND REGULATION

Governance of the education technology system is fragmented. A department or an agency responsible for education technology has been identified in 82% of countries. Placing education ministries in charge of education technology strategies and plans could help ensure that decisions are primarily based on pedagogical principles. However, this is the case in just 58% of countries. In Kenya, the 2019 National Information, Communications and Technology Policy led the Ministry of Information, Communications and Technology to integrate ICT at all levels of education.

Participation is often limited in the development of education technology strategies and plans. Nepal established a Steering and a Coordination Committee under the 2013–17 ICT in Education Master Plan for intersectoral and inter-agency coordination and cooperation in its implementation. Including administrators, teachers and students can help bridge the knowledge gap with decision makers to ensure that education technology choices are appropriate. In 2022, only 41% of US education sector leaders agreed that they were regularly included in planning and strategic conversations about technology.

The private sector’s commercial interests can clash with government equity, quality and efficiency goals. In India, the government alerted families about the hidden costs of free online content. Other risks relate to data use and protection, privacy, interoperability and lock-in effects, whereby students and teachers are compelled to use specific software or platforms. Google, Apple and Microsoft produce education platforms tied to particular hardware and operating systems.

Privacy risks to children make their learning environment unsafe. One analysis found that 89% of 163 education technology products recommended for children’s learning during the COVID-19 pandemic could or did watch children outside school hours or education settings. In addition, 39 of 42 governments providing online education during the pandemic fostered uses that ‘risked or infringed’ upon children’s rights. Data used for predictive algorithms can bias predictions and decisions and lead to discrimination, privacy violations and exclusion of disadvantaged groups. The Cyberspace Administration of China and the Ministry of Education introduced regulations in 2019 requiring parental consent before devices powered by AI, such as cameras and headbands, could be used with students in schools and required data to be encrypted.

Children’s exposure to screen time has increased. A survey of parents of 3- to 8-year-olds in Australia, China, Italy, Sweden and the United States found that their children’s screen exposure increased by 50 minutes during the pandemic for both education and leisure. Extended screen time can negatively affect self-control and emotional stability, increasing anxiety and depression. Few countries have strict regulations on screen time. In China, the Ministry of Education limited the use of digital devices as teaching tools to 30% of overall teaching time. Fewer than one in four countries ban the use of smartphones in schools. Italy and the United States have banned the use of specific tools or social media from schools. Cyberbullying and online abuse are rarely defined as offences but can fall under existing laws, such as stalking laws in Australia and harassment laws in Indonesia.

Monitoring of data protection law implementation is needed. Only 16% of countries explicitly guarantee data privacy in education by law and 29% have a relevant policy, mainly in Europe and Northern America. The number of cyberattacks in education is rising. Such attacks increase exposure to theft of identity and other personal data, but capacity and funds to address the issue are often insufficient. Globally, 5% of all ransomware attacks targeted the education sector in 2022, accounting for more than 30% of cybersecurity breaches. Regulations on sharing children’s personal information are rare but are starting to emerge under the EU’s General Data Protection Regulation. China and Japan have binding instruments on protecting children’s data and information.

Technology has an impact on the teaching profession. Technology allows teachers to choose, modify and generate educational materials. Personalized learning platforms offer teachers customized learning paths and insights based on student data. During the COVID-19 pandemic, France facilitated access to 17 online teaching resource banks mapped against the national curriculum. The Republic of Korea temporarily eased copyright restrictions for teachers. Online teacher-student collaboration platforms provide access to support services, facilitate work team creation, allow participation in virtual sessions and promote sharing of learning materials.

Obstacles to integrating technology in education prevent teachers from fully embracing it. Inadequate digital infrastructure and lack of devices hinder teachers’ ability to integrate technology in their practice. A survey in 165 countries during the pandemic found that two in five teachers used their own devices, and almost one third of schools had only one device for education use. Some teachers lack training to use digital devices effectively. Older teachers may struggle to keep up with rapidly changing technology. The 2018 Teaching and Learning International Survey (TALIS) found that older teachers in 48 education systems had weaker skills and lower self-efficacy in using ICT. Some teachers may lack confidence. Only 43% of lower secondary school teachers in the 2018 TALIS said they felt prepared to use technology for teaching after training, and 78% of teachers in the 2018 ICILS were not confident in using technology for assessment.

Education systems support teachers in developing technology-related professional competencies. About half of education systems worldwide have ICT standards for teachers in a competency framework, teacher training framework, development plan or strategy. Education systems set up annual digital education days for teachers, promote OER, support the exchange of experiences and resources between teachers, and offer training. One quarter of education systems have legislation to ensure teachers are trained in technology, either through initial or in-service training. Some 84% of education systems have strategies for in-service teacher professional development, compared with 72% for pre-service teacher education in technology. Teachers can identify their development needs using digital self-assessment tools such as that provided by the Centre for Innovation in Brazilian Education.

Technology is changing teacher training. Technology is used to create flexible learning environments, engage teachers in collaborative learning, support coaching and mentoring, increase reflective practice, and improve subject or pedagogical knowledge. Distance education programmes have promoted teacher learning in South Africa and even equalled the impact of in-person training in Ghana. Virtual communities have emerged, primarily through social networks, for communication and resource sharing. About 80% of teachers surveyed in the Caribbean belonged to professional WhatsApp groups and 44% used instant messaging to collaborate at least once a week. In Senegal, the Reading for All programme used in-person and online coaching. Teachers considered face-to-face coaching more useful, but online coaching cost 83% less and still achieved a significant, albeit small, improvement in how teachers guided students’ reading practice. In Flanders, Belgium, KlasCement, a teacher community network created by a non-profit and now run by the Ministry of Education, expanded access to digital education and provided a platform for discussions on distance education during the pandemic.

Many actors support teacher professional development in ICT. Universities, teacher training institutions and research institutes provide specialized training, research opportunities and partnerships with schools for professional development in ICT. In Rwanda, universities collaborated with teachers and the government to develop the ICT Essentials for Teachers course. Teacher unions also advocate for policies that support teachers. The Confederation of Education Workers of the Argentine Republic established the right of teachers to disconnect. Civil society organizations, including the Carey Institute for Global Good, offer support through initiatives such as providing OER and online courses for refugee teachers in Chad, Kenya, Lebanon and Niger.


This site belongs to UNESCO's International Institute for Educational Planning


Information and communication technology (ICT) in education

Information and communications technology (ICT) can impact student learning when teachers are digitally literate and understand how to integrate it into the curriculum.

Schools use a diverse set of ICT tools to communicate, create, disseminate, store, and manage information.(6) In some contexts, ICT has also become integral to the teaching-learning interaction, through such approaches as replacing chalkboards with interactive digital whiteboards, using students’ own smartphones or other devices for learning during class time, and the “flipped classroom” model where students watch lectures at home on the computer and use classroom time for more interactive exercises.

When teachers are digitally literate and trained to use ICT, these approaches can lead to higher order thinking skills, provide creative and individualized options for students to express their understandings, and leave students better prepared to deal with ongoing technological change in society and the workplace.(18)

Issues planners must consider include the total cost-benefit equation, supplying and maintaining the requisite infrastructure, and ensuring investments are matched with teacher support and other policies aimed at effective ICT use.(16)

Issues and Discussion

Digital culture and digital literacy: Computer technologies and other aspects of digital culture have changed the ways people live, work, play, and learn, impacting the construction and distribution of knowledge and power around the world.(14) Graduates who are less familiar with digital culture are increasingly at a disadvantage in the national and global economy. Digital literacy—the skills of searching for, discerning, and producing information, as well as the critical use of new media for full participation in society—has thus become an important consideration for curriculum frameworks.(8)

In many countries, digital literacy is being built through the incorporation of information and communication technology (ICT) into schools. Some common educational applications of ICT include:

  • One laptop per child: Less expensive laptops have been designed for use in school on a 1:1 basis with features like lower power consumption, a low cost operating system, and special re-programming and mesh network functions.(42) Despite efforts to reduce costs, however, providing one laptop per child may be too costly for some developing countries.(41)
  • Tablets: Tablets are small personal computers with a touch screen, allowing input without a keyboard or mouse. Inexpensive learning software (“apps”) can be downloaded onto tablets, making them a versatile tool for learning.(7)(25) The most effective apps develop higher order thinking skills and provide creative and individualized options for students to express their understandings.(18)
  • Interactive White Boards or Smart Boards: Interactive white boards allow projected computer images to be displayed, manipulated, dragged, clicked, or copied.(3) Simultaneously, handwritten notes can be taken on the board and saved for later use. Interactive white boards are associated with whole-class instruction rather than student-centred activities.(38) Student engagement is generally higher when ICT is available for student use throughout the classroom.(4)
  • E-readers : E-readers are electronic devices that can hold hundreds of books in digital form, and they are increasingly utilized in the delivery of reading material.(19) Students—both skilled readers and reluctant readers—have had positive responses to the use of e-readers for independent reading.(22) Features of e-readers that can contribute to positive use include their portability and long battery life, response to text, and the ability to define unknown words.(22) Additionally, many classic book titles are available for free in e-book form.
  • Flipped Classrooms: The flipped classroom model, in which students view lectures and practice at home via computer-guided instruction while class time is devoted to interactive learning activities, can allow for an expanded curriculum. Research on the student learning outcomes of flipped classrooms remains limited.(5) Student perceptions about flipped classrooms are mixed, but generally positive, as they prefer the cooperative learning activities in class over lecture.(5)(35)

ICT and Teacher Professional Development: Teachers need specific professional development opportunities in order to increase their ability to use ICT for formative learning assessments, individualized instruction, accessing online resources, and for fostering student interaction and collaboration.(15) Such training in ICT should positively impact teachers’ general attitudes towards ICT in the classroom, but it should also provide specific guidance on ICT teaching and learning within each discipline. Without this support, teachers tend to use ICT for skill-based applications, limiting student academic thinking.(32) To support teachers as they change their teaching, it is also essential for education managers, supervisors, teacher educators, and decision makers to be trained in ICT use.(11)

Ensuring benefits of ICT investments: To ensure the investments made in ICT benefit students, additional conditions must be met. School policies need to provide schools with the minimum acceptable infrastructure for ICT, including stable and affordable internet connectivity and security measures such as filters and site blockers. Teacher policies need to target basic ICT literacy skills, ICT use in pedagogical settings, and discipline-specific uses.(21) Successful implementation of ICT requires integration of ICT in the curriculum. Finally, digital content needs to be developed in local languages and reflect local culture.(40) Ongoing technical, human, and organizational supports on all of these issues are needed to ensure access and effective use of ICT.(21)

Resource Constrained Contexts: The total cost of ICT ownership is considerable: training of teachers and administrators, connectivity, technical support, and software, amongst others.(42) When bringing ICT into classrooms, policies should use an incremental pathway, establishing infrastructure and bringing in sustainable and easily upgradable ICT.(16) Schools in some countries have begun allowing students to bring their own mobile technology (such as a laptop, tablet, or smartphone) into class rather than providing such tools to all students—an approach called Bring Your Own Device.(1)(27)(34) However, not all families can afford devices or service plans for their children.(30) Schools must ensure all students have equitable access to ICT devices for learning.

Inclusiveness Considerations

Digital Divide: The digital divide refers to disparities in access to digital media and the internet, both within and across countries, as well as the gap between people with and without the digital literacy and skills to utilize media and the internet.(23)(26)(31) The digital divide both creates and reinforces socio-economic inequalities among the world’s poorest people. Policies need to intentionally bridge this divide to bring media, internet, and digital literacy to all students, not just those who are easiest to reach.

Minority language groups: Students whose mother tongue is different from the official language of instruction are less likely to have computers and internet connections at home than students from the majority. There is also less material available to them online in their own language, putting them at a disadvantage in comparison to their majority peers who gather information, prepare talks and papers, and communicate more using ICT.(39) Yet ICT tools can also help improve the skills of minority language students—especially in learning the official language of instruction—through features such as automatic speech recognition, the availability of authentic audio-visual materials, and chat functions.(2)(17)

Students with different styles of learning: ICT can provide diverse options for taking in and processing information, making sense of ideas, and expressing learning. Over 87% of students learn best through visual and tactile modalities, and ICT can help these students ‘experience’ the information instead of just reading and hearing it.(20)(37) Mobile devices can also offer programmes (“apps”) that provide extra support to students with special needs, with features such as simplified screens and instructions, consistent placement of menus and control features, graphics combined with text, audio feedback, ability to set pace and level of difficulty, appropriate and unambiguous feedback, and easy error correction.(24)(29)

Plans and policies

  • India [ PDF ]
  • Detroit, USA [ PDF ]
  • Finland [ PDF ]
References

  • Alberta Education. 2012. Bring your own device: A guide for schools. Retrieved from http://education.alberta.ca/admin/technology/research.aspx
  • Alsied, S.M. and Pathan, M.M. 2015. ‘The use of computer technology in EFL classroom: Advantages and implications.’ International Journal of English Language and Translation Studies . 1 (1).
  • BBC. N.D. ‘What is an interactive whiteboard?’ Retrieved from http://www.bbcactive.com/BBCActiveIdeasandResources/Whatisaninteractivewhiteboard.aspx
  • Beilefeldt, T. 2012. ‘Guidance for technology decisions from classroom observation.’ Journal of Research on Technology in Education . 44 (3).
  • Bishop, J.L. and Verleger, M.A. 2013. ‘The flipped classroom: A survey of the research.’ Presented at the 120th ASEE Annual Conference and Exposition. Atlanta, Georgia.
  • Blurton, C. 2000. New Directions of ICT-Use in Education. United Nations Educational, Scientific and Cultural Organization (UNESCO).
  • Bryant, B.R., Ok, M., Kang, E.Y., Kim, M.K., Lang, R., Bryant, D.P. and Pfannestiel, K. 2015. ‘Performance of fourth-grade students with learning disabilities on multiplication facts comparing teacher-mediated and technology-mediated interventions: A preliminary investigation.’ Journal of Behavioral Education. 24.
  • Buckingham, D. 2005. Educación en medios. Alfabetización, aprendizaje y cultura contemporánea, Barcelona, Paidós.
  • Buckingham, D., Sefton-Green, J., and Scanlon, M. 2001. 'Selling the Digital Dream: Marketing Education Technologies to Teachers and Parents.'  ICT, Pedagogy, and the Curriculum: Subject to Change . London: Routledge.
  • Burk, R. 2001. ‘E-book devices and the marketplace: In search of customers.’ Library Hi Tech. 19 (4).
  • Chapman, D., and Mählck, L. (Eds). 2004. Adapting technology for school improvement: a global perspective. Paris: International Institute for Educational Planning.
  • Cheung, A.C.K and Slavin, R.E. 2012. ‘How features of educational technology applications affect student reading outcomes: A meta-analysis.’ Educational Research Review . 7.
  • Cheung, A.C.K and Slavin, R.E. 2013. ‘The effectiveness of educational technology applications for enhancing mathematics achievement in K-12 classrooms: A meta-analysis.’ Educational Research Review . 9.
  • Deuze, M. 2006. 'Participation Remediation Bricolage - Considering Principal Components of a Digital Culture.' The Information Society . 22 .
  • Dunleavy, M., Dextert, S. and Heinecke, W.F. 2007. ‘What added value does a 1:1 student to laptop ratio bring to technology-supported teaching and learning?’ Journal of Computer Assisted Learning . 23.
  • Enyedy, N. 2014. Personalized Instruction: New Interest, Old Rhetoric, Limited Results, and the Need for a New Direction for Computer-Mediated Learning . Boulder, CO: National Education Policy Center.
  • Golonka, E.M., Bowles, A.R., Frank, V.M., Richardson, D.L. and Freynik, S. 2014. ‘Technologies for foreign language learning: A review of technology types and their effectiveness.’ Computer Assisted Language Learning . 27 (1).
  • Goodwin, K. 2012. Use of Tablet Technology in the Classroom . Strathfield, New South Wales: NSW Curriculum and Learning Innovation Centre.
  • Jung, J., Chan-Olmsted, S., Park, B., and Kim, Y. 2011. 'Factors affecting e-book reader awareness, interest, and intention to use.' New Media & Society . 14 (2)
  • Kenney, L. 2011. ‘Elementary education, there’s an app for that. Communication technology in the elementary school classroom.’ The Elon Journal of Undergraduate Research in Communications . 2 (1).
  • Kopcha, T.J. 2012. ‘Teachers’ perceptions of the barriers to technology integration and practices with technology under situated professional development.’ Computers and Education . 59.
  • Miranda, T., Williams-Rossi, D., Johnson, K., and McKenzie, N. 2011. ‘Reluctant readers in middle school: Successful engagement with text using the e-reader.’ International Journal of Applied Science and Technology. 1 (6).
  • Moyo, L. 2009. 'The digital divide: scarcity, inequality and conflict.' Digital Cultures . New York: Open University Press.
  • Newton, D.A. and Dell, A.G. 2011. ‘Mobile devices and students with disabilities: What do best practices tell us?’ Journal of Special Education Technology . 26 (3).
  • Nirvi, S. 2011. ‘Special education pupils find learning tool in iPad applications.’ Education Week. 30.
  • Norris, P. 2001. Digital Divide: Civic Engagement, Information Poverty, and the Internet Worldwide . Cambridge, USA: Cambridge University Press.
  • Project Tomorrow. 2012. Learning in the 21st century: Mobile devices + social media = personalized learning . Washington, D.C.: Blackboard K-12.
  • Riasati, M.J., Allahyar, N. and Tan, K.E. 2012. ‘Technology in language education: Benefits and barriers.’ Journal of Education and Practice . 3 (5).
  • Rodriquez, C.D., Strnadova, I. and Cumming, T. 2013. ‘Using iPads with students with disabilities: Lessons learned from students, teachers, and parents.’ Intervention in School and Clinic . 49 (4).
  • Sangani, K. 2013. 'BYOD to the classroom.' Engineering & Technology . 3 (8).
  • Servon, L. 2002. Redefining the Digital Divide: Technology, Community and Public Policy . Malden, MA: Blackwell Publishers.
  • Smeets, E. 2005. ‘Does ICT contribute to powerful learning environments in primary education?’ Computers and Education. 44 .
  • Smith, G.E. and Thorne, S. 2007. Differentiating Instruction with Technology in K-5 Classrooms . Eugene, OR: International Society for Technology in Education.
  • Song, Y. 2014. '"Bring your own device (BYOD)" for seamless science inquiry in a primary school.' Computers & Education. 74 .
  • Strayer, J.F. 2012. ‘How learning in an inverted classroom influences cooperation, innovation and task orientation.’ Learning Environment Research. 15.
  • Tamim, R.M., Bernard, R.M., Borokhovski, E., Abrami, P.C. and Schmid, R.F. 2011. ‘What forty years of research says about the impact of technology on learning: A second-order meta-analysis and validation study.’ Review of Educational Research. 81 (1).
  • Tileston, D.W. 2003. What Every Teacher Should Know about Media and Technology. Thousand Oaks, CA: Corwin Press.
  • Turel, Y.K. and Johnson, T.E. 2012. ‘Teachers’ belief and use of interactive whiteboards for teaching and learning.’ Educational Technology and Society . 15(1).
  • Volman, M., van Eck, E., Heemskerk, I. and Kuiper, E. 2005. ‘New technologies, new differences. Gender and ethnic differences in pupils’ use of ICT in primary and secondary education.’ Computers and Education. 45 .
  • Voogt, J., Knezek, G., Cox, M., Knezek, D. and ten Brummelhuis, A. 2013. ‘Under which conditions does ICT have a positive effect on teaching and learning? A call to action.’ Journal of Computer Assisted Learning. 29 (1).
  • Warschauer, M. and Ames, M. 2010. ‘Can one laptop per child save the world’s poor?’ Journal of International Affairs. 64 (1).
  • Zucker, A.A. and Light, D. 2009. ‘Laptop programs for students.’ Science. 323 (5910).

Related information

  • Information and communication technologies (ICT)

Stanford University


Listen to the essay, as read by Antero Garcia, associate professor in the Graduate School of Education.

As a professor of education and a former public school teacher, I’ve seen digital tools change lives in schools.

I’ve documented the ways mobile technology like phones can transform student engagement in my own classroom.

I’ve explored how digital tools might network powerful civic learning and dialogue for classrooms across the country – elements of education that are crucial for sustaining our democracy today.

And, like everyone, I’ve witnessed digital technologies make schooling safer in the midst of a global pandemic. Zoom and Google Classroom, for instance, allowed many students to attend classrooms virtually during a period when it was not feasible to meet in person.

So I want to tell you that I think technologies are changing education for the better and that we need to invest more in them – but I just can’t.

Given the substantial amount of scholarly time I’ve invested in documenting the life-changing possibilities of digital technologies, it gives me no pleasure to suggest that these tools might be slowly poisoning us. Despite their purported transformational value, I’ve been wondering if our investment in educational technology might in fact be making our schools worse.

Let me explain.

When I was a classroom teacher, I loved relying on the latest tools to create impressive and immersive experiences for my students. We would utilize technology to create class films, produce social media profiles for the Janie Crawfords, the Holden Caulfields, and other literary characters we studied, and find playful ways to digitally share our understanding of the ideas we studied in our classrooms.

As a teacher, technology was a way to build on students’ interests in pop culture and the world around them. This was exciting to me.

But I’ve continued to understand that the aspects of technology I loved weren’t actually about technology at all – they were about creating authentic learning experiences with young people. At the heart of these digital explorations were my relationships with students and the trust we built together.

“Part of why I’ve grown so skeptical about this current digital revolution is because of how these tools reshape students’ bodies and their relation to the world around them.”

I do see promise in the suite of digital tools that are available in classrooms today. But my research focus on platforms – digital spaces like Amazon, Netflix, and Google that reshape how users interact in online environments – suggests that when we focus on the trees of individual tools, we ignore the larger forest of social and cognitive challenges.

Most people encounter platforms every day in their online social lives. From the few online retail stores where we buy groceries to the small handful of sites that stream our favorite shows and media content, platforms have narrowed how we use the internet today to a small collection of Silicon Valley behemoths. Our social media activities, too, are limited to one or two sites where we check on the updates, photos, and looped videos of friends and loved ones.

These platforms restrict our online and offline lives to a relatively small number of companies and spaces – we communicate with a finite set of tools and consume a set of media that is often algorithmically suggested. This centralization of the internet – a trend decades in the making – makes me very uneasy.

From willfully hiding the negative effects of social media use for vulnerable populations to creating tools that reinforce racial bias, today’s platforms are causing harm and sowing disinformation for young people and adults alike. The deluge of difficult ethical and pedagogical questions around these tools is not being broached in any meaningful way in schools – even adults aren’t sure how to manage their online lives.

You might ask, “What does this have to do with education?” Platforms are also a large part of how modern schools operate. From classroom management software to attendance tracking to the online tools that allowed students to meet safely during the pandemic, platforms guide nearly every student interaction in schools today. But districts are utilizing these tools without considering the wider spectrum of changes that they have incurred alongside them.

Antero Garcia, associate professor of education (Image credit: Courtesy Antero Garcia)

For example, it might seem helpful for a school to use a management tool like Classroom Dojo (a digital platform that can offer parents ways to interact with and receive updates from their family’s teacher) or software that tracks student reading and development like Accelerated Reader for day-to-day needs. However, these tools limit what assessment looks like and penalize students based on flawed interpretations of learning.

Another problem with platforms is that they, by necessity, amass large swaths of data. Myriad forms of educational technology exist – from virtual reality headsets to e-readers to the small sensors on student ID cards that can track when students enter schools. And all of this student data is being funneled out of schools and into the virtual black boxes of company databases.

Part of why I’ve grown so skeptical about this current digital revolution is because of how these tools reshape students’ bodies and their relation to the world around them. Young people are not viewed as complete human beings but as boxes checked for attendance, for meeting academic progress metrics, or for confirming their location within a school building. Nearly every action that students perform in schools – whether it’s logging onto devices, accessing buildings, or sharing content through their private online lives – is noticed and recorded. Children in schools have become disembodied from their minds and their hearts. Thus, one of the greatest and implicit lessons that kids learn in schools today is that they must sacrifice their privacy in order to participate in conventional, civic society.

The pandemic has only made the situation worse. At its beginnings, some schools relied on software to track students’ eye movements, ostensibly ensuring that kids were paying attention to the tasks at hand. Similarly, many schools required students to keep their cameras on during class time for similar purposes. These might be seen as in the best interests of students and their academic growth, but such practices are part of a larger (and usually more invisible) process of normalizing surveillance in the lives of youth today.

I am not suggesting that we completely reject all of the tools at our disposal – but I am urging for more caution. Even the seemingly benign resources we might use in our classrooms today come with tradeoffs. Every Wi-Fi-connected, “smart” device utilized in schools is an investment in time, money, and expertise in technology over teachers and the teaching profession.

Our focus on fixing or saving schools via digital tools assumes that the benefits and convenience that these invisible platforms offer are worth it.

But my ongoing exploration of how platforms reduce students to quantifiable data suggests that we are removing the innovation and imagination of students and teachers in the process.

Antero Garcia is associate professor of education in the Graduate School of Education .

In Their Own Words is a collaboration between the Stanford Public Humanities Initiative  and Stanford University Communications.


Why technology in education must be on our terms

Cameroon school children learning to use computer in classroom

The relationship between technology and education has been a topic of interest for decades. While technology presents remarkable opportunities, it's essential to approach its integration thoughtfully and responsibly. The  2023 Global Education Monitoring (GEM) Report offers valuable insights into how technology has transformed education, its benefits, limitations, and the challenges associated with its implementation.  

The flagship UNESCO report highlights the lack of appropriate governance and regulation, especially amidst rapidly emerging generative artificial intelligence tools. It urges countries to urgently set their own terms for the way technology is designed and used in learning so that it never replaces in-person, teacher-led instruction, and supports quality education for all. Here are some insights from the report. 

What has been the evolution of technology in education?

While the use of technology in education dates back to the emergence of radio in the 1920s, it's the digital technology of the last 40 years that holds the greatest potential for educational transformation. This period has witnessed a revolution in content distribution, learning management systems, testing methods, and language instruction. From augmented reality to personalized tutoring, technology has reshaped our learning experiences. Recent advancements in artificial intelligence have amplified the capabilities of educational technology, even raising questions about the role of human interaction in education.

What is the impact of technology on learning?

Technology undeniably enhances learning in specific contexts. However, it is crucial to recognize that a one-size-fits-all approach does not apply. Digital technology's primary contributions to learning lie in its ability to personalize instruction and extend available learning time. Additionally, it fosters engagement by encouraging interaction and collaboration among learners. Notably, the report highlights that technology need not be cutting-edge to be effective. For instance, in China, providing high-quality lesson recordings to rural students resulted in a 32% improvement in outcomes and a 38% reduction in urban-rural learning gaps.

How do we evaluate technology's effectiveness in education?

The report emphasizes that evaluating technology's impact must focus on learning outcomes rather than the mere implementation of digital tools. Cases such as Peru, where laptops were distributed without integrating them into pedagogy, demonstrate that technology alone doesn't guarantee improved learning. Similarly, exclusive reliance on remote instruction in the United States widened learning gaps. The report further warns against inappropriate or excessive technology use, citing instances of negative links between excessive ICT use and student performance.

How reliable is the evidence?

The rapid evolution of technology often outpaces its evaluation. Evidence primarily comes from affluent countries, raising concerns about generalizability. The report reveals that a mere 7% of education technology companies in the United Kingdom conducted randomized controlled trials, reflecting a lack of rigorous evaluation. The challenge of isolating technology's impact from other factors complicates precise assessment. Additionally, the influence of technology companies on evidence generation poses credibility challenges.
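
The randomized controlled trials the report finds so rare follow a simple logic: assign students to groups at random, then compare mean outcomes between the groups. A minimal sketch of that comparison, with made-up scores and a hypothetical function name:

```python
# Illustrative RCT arithmetic: after random assignment, the difference in
# group means estimates the technology's average effect. Scores are invented.
def mean(xs):
    return sum(xs) / len(xs)

def average_effect(treatment_scores, control_scores):
    """Estimated average effect: treatment mean minus control mean."""
    return mean(treatment_scores) - mean(control_scores)

treatment = [72, 80, 68, 75]   # students randomly assigned the technology
control   = [70, 74, 66, 70]   # students taught without it
print(average_effect(treatment, control))  # → 3.75
```

Random assignment is what allows this difference in means to be read as the technology's effect rather than as a selection artifact, which is precisely the isolation problem the report says non-experimental evidence cannot solve.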

What are the recommendations for effective integration of technology in education?

As artificial intelligence gains prominence, the report emphasizes that not all technological change equates to progress. The adoption of technology must be guided by a learner-centric, rights-based framework, ensuring appropriateness, equity, evidence-based decisions, and sustainability. The report presents a four-point compass for policy-makers:

  • Look down: Evaluate the context and learning objectives to ensure technology choices strengthen education systems.
  • Look back: Prioritize marginalized groups to ensure that technology benefits all learners and narrows educational disparities.
  • Look up: Ensure evidence-based decision-making and consider hidden long-term costs before scaling up technology initiatives.
  • Look forward: Align technology integration with sustainable development goals, considering financial implications, children's well-being, and environmental impact.

Technology in education: A tool on whose terms


From 4 to 7 September, UNESCO's  Digital Learning Week will gather policy-makers, practitioners, educators, private sector partners, researchers and development agencies to jointly explore how public digital learning platforms and generative AI can be steered to reinforce and enrich human-centered quality education.

  • Download the  2023 GEM Report  
  • Read the  press release  
  • Join the conversation on social media via  #TechOnOurTerms
  • More on the  Global Education Monitoring Report
  • More on UNESCO's  Digital Learning Week

AI As A Teaching Assistant: How Schools Can Leverage Technology To Enhance Classrooms

AI is transforming education by personalising learning, supporting educators, and enhancing educational experiences, but ethical use, data privacy, and ongoing training are essential for effective integration.


By Mr Davi Netto

The advent of Artificial Intelligence (AI) in education heralds a transformative era, promising unparalleled opportunities for innovation and improvement. AI is being integrated into various educational settings, from K-12 to higher education, and in online learning platforms. AI has become an increasingly important tool in modern education as technology advances.

It is widely understood that AI will play a pivotal role in education, but it is essential to grasp its profound impact across the spectrum. Educating faculty members on the potential and challenges AI may pose in the education landscape is paramount. To achieve this goal, schools should undertake comprehensive internal surveys to assess understanding, readiness, and adoption of AI across their different grade levels.

Impact on Learners

AI holds immense potential to tailor education to meet the diverse needs of learners, making learning more effective and engaging for everyone. By analyzing individual learner data, AI can identify preferred learning styles, strengths, and areas for improvement. This capability enables the creation of personalized learning paths that adjust content difficulty, lesson pacing, and instructional methods according to each learner's needs. For instance, AI can scaffold, modify lesson complexity or provide additional resources based on learner performance and preferences, ensuring a more customized learning experience.

AI-powered educational tools offer real-time feedback and suggestions. These tools provide instant explanations and practice exercises tailored to the learner's current level of understanding, allowing learners to progress at their own pace. By adapting to each learner's needs, AI ensures that support always aligns with their learning journey.
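
As an illustration only (not any vendor's actual algorithm), the pacing logic described above can be reduced to a simple rule: step difficulty up when recent scores are consistently high, down when they are low. All names and thresholds below are assumptions:

```python
# Hypothetical sketch of adaptive pacing: difficulty responds to a
# learner's recent performance. Thresholds (0.85, 0.5) are illustrative.
def next_difficulty(current_level: int, recent_scores: list) -> int:
    """Adjust lesson difficulty from a learner's recent scores (0.0-1.0)."""
    if not recent_scores:
        return current_level
    avg = sum(recent_scores) / len(recent_scores)
    if avg >= 0.85:               # consistently strong: step up
        return current_level + 1
    if avg < 0.5:                 # struggling: step down, never below 1
        return max(1, current_level - 1)
    return current_level          # otherwise hold steady

print(next_difficulty(3, [0.9, 0.95, 0.88]))  # → 4
print(next_difficulty(3, [0.3, 0.45]))        # → 2
```

Real adaptive systems use far richer learner models; the point is only that content difficulty is driven by performance data rather than fixed in advance.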

Enriching Educational Experiences

Moreover, AI can enrich educational experiences by integrating advanced technologies like speech-to-text, text-to-speech, and virtual reality (VR). These tools cater to various learning preferences—auditory, visual, or kinesthetic—making complex concepts more accessible to all learners. This technological integration helps accommodate different learning styles and fosters a more inclusive educational environment.

Early Detection of Academic Difficulties

AI systems also play a crucial role in the early detection of academic difficulties. By identifying patterns that indicate a learner might be struggling or at risk of falling behind, AI enables educators to intervene promptly. This proactive approach allows for targeted support and resources, helping learners overcome challenges before they become significant barriers. AI enables a shift from one-size-fits-all teaching to personalized learning experiences.
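
A minimal sketch of this early-warning idea, using hypothetical field names and thresholds (real systems learn such risk patterns from far richer data; nothing here is a real product's API):

```python
# Illustrative only: flag learners whose average score or weekly logins
# fall below simple assumed cutoffs, so an educator can follow up early.
def flag_at_risk(learners, min_score=0.6, min_logins=3):
    """learners: iterable of (name, recent_scores, weekly_logins)."""
    flagged = []
    for name, scores, logins in learners:
        avg = sum(scores) / len(scores) if scores else 0.0
        if avg < min_score or logins < min_logins:
            flagged.append(name)
    return flagged

roster = [
    ("Amina", [0.9, 0.8], 5),   # strong scores, active: not flagged
    ("Ben",   [0.4, 0.5], 4),   # low scores: flagged
    ("Chen",  [0.9, 0.9], 1),   # low engagement: flagged
]
print(flag_at_risk(roster))  # → ['Ben', 'Chen']
```

The value of such a system lies less in the rule itself than in surfacing the pattern early enough for a teacher to act on it.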

Empowering Faculty with AI

AI tools assist educators by automating administrative tasks, such as grading and scheduling, allowing them to focus more on teaching. These tools also offer insights into learner performance, helping teachers refine their methods and provide targeted support when needed.

Ongoing Training and Support

Ongoing training for educators is crucial to integrate AI into the classroom effectively. Professional development programs should cover AI principles, ethical use, and the latest advancements, ensuring that teachers can guide learners in using AI responsibly and establish clear guidelines for its use. Monitoring systems help ensure that AI-generated content is used appropriately and that academic integrity is maintained. This includes using plagiarism detection tools and setting policies to educate and prevent misuse.
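
Plagiarism detection tools differ widely in implementation, but many rest on text-overlap measures. A toy illustration of that core idea, assuming nothing about any particular product, is Jaccard similarity over word 3-grams:

```python
# Toy overlap check: share of word 3-grams two texts have in common.
# Real detectors use much more sophisticated fingerprinting than this.
def shingles(text: str, n: int = 3) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity of the two texts' word 3-gram sets."""
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

original = "technology can personalize learning for every student in class"
copied   = "technology can personalize learning for each student in class"
print(round(similarity(original, copied), 2))  # → 0.4
```

Even a one-word substitution breaks several shingles, which is why overlap scores degrade gracefully rather than flipping between 0 and 1.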

By analyzing individual learner data, AI supports the development of dynamic curricula that can adapt to the latest advancements in various fields. This includes integrating AI concepts into existing subjects and creating new educational pathways that reflect current and future technological landscapes.

Addressing Challenges

Integrating AI into existing educational frameworks can be complex. Educators should understand how to use AI tools effectively to integrate them into their teaching practices. This often requires substantial professional development and ongoing support.

Managing learner data responsibly is crucial when using AI. Ensuring AI systems comply with data protection regulations and safeguarding learners’ personal information can be challenging. The use of AI in education also raises ethical concerns, such as potential biases in AI algorithms and the need for transparency in how AI systems make decisions. Educators must navigate these issues to maintain fairness and trust in the educational process. Limited resources can further hinder classroom adoption and effective use of AI technologies. Providing comprehensive professional development and support is essential so educators can leverage AI effectively to enhance teaching and improve learning.

In preparing learners for an AI-driven world, educators must introduce AI concepts early in the educational journey. This foundational knowledge should aim to help learners understand AI's applications across various industries and equip them with the skills needed to thrive in a technologically advanced future. Emphasizing critical thinking, creativity, and ethical understanding is essential. These skills will enable learners to use AI tools effectively and make informed decisions about their applications.

Additionally, teaching learners about the ethical implications of AI, including data privacy, bias, and transparency, ensures that they use AI responsibly and understand its societal impact. Educators also need ongoing professional development to stay current with AI advancements and best practices. Training programs should focus on integrating AI into teaching, supervising AI use, and maintaining academic integrity. By leveraging AI to enhance curriculum development and focusing on comprehensive AI literacy, educators can prepare learners to excel in an increasingly AI-driven world.

(The Author is the Principal of JBCN International School, Parel)


  • Research article
  • Open access
  • Published: 23 September 2024

Social comparison feedback in online teacher training and its impact on asynchronous collaboration

  • Yao Lu   ORCID: orcid.org/0000-0002-5510-1402 1 ,
  • Ning Ma   ORCID: orcid.org/0000-0002-1941-724X 1 , 2 &
  • Wen-Yu Yan 1  

International Journal of Educational Technology in Higher Education volume  21 , Article number:  55 ( 2024 ) Cite this article


In the area of online teacher training, asynchronous collaboration faces several challenges such as limited learner engagement and low interaction quality, thereby hindering its overall effectiveness. Drawing on social comparison theory, providing social comparison feedback to teacher-learners in online asynchronous collaborative learning offers benefits, but also has drawbacks. While social comparison has been explored in diverse fields, its role in education remains unclear. In this study, we selected 95 primary and secondary school teachers participating in an online training course. Using randomized controlled trial design, we provided the experimental group with social comparison feedback, while the control group received only self-referential feedback. We used epistemic network analysis, lag sequential analysis, and social network analysis to identify the impact of social comparison feedback on group-regulated focus, group-interactive behaviors, and social network structures. The results showed that social comparison feedback significantly enhanced teachers’ online asynchronous collaborative learning.

Introduction

There is a global emphasis on enhancing the professional competencies of in-service teachers (Depaepe & König, 2018 ). The rise of online teacher training, driven by advancements in information technology, has been recognized for its effectiveness in helping teachers acquire new skills and improve their professional practices (Kalinowski et al., 2020 ). The transition from face-to-face training to online platforms has significantly elevated the quality of teacher training (Ma et al., 2022a , 2022b ). Unlike traditional face-to-face training, online training offers flexibility, allowing educators to learn at their own pace and on their own schedule (Prestridge, 2016 ). This flexibility is crucial for overcoming time and geographical constraints and is further enhanced by the availability of online professional learning communities (Kalinowski et al., 2020 ), which increase accessibility and foster deeper engagement in professional development.

A key component of online training is asynchronous interaction (Frey & Alman, 2003 ), typically manifested as online asynchronous collaboration. This mode of interaction, which does not require real-time communication, provides flexibility that enhances engagement, collaboration, and inspiration within online learning environments (Burns et al., 2022 ). It also improves learners’ engagement, participation, and higher-order thinking skills (Bailey et al., 2020 ; Li et al., 2018 ). However, the delayed nature of asynchronous communication can lead to extended response times, potentially reducing training efficiency (Kim et al., 2015 ). Therefore, leveraging learning analytics to help educators understand learners’ behaviors and performance, as well as to provide timely and adaptive feedback and support, is crucial for optimizing online learning (Banihashem et al., 2024 ).

Social comparison, as defined by Festinger ( 1954 ), involves individuals assessing their abilities and behaviors against those of others. It is a common method for self-evaluation and self-assessment (Fam et al., 2020 ), with significant implications in fields such as psychology and medicine (Baldwin & Mussweiler, 2018 ; Corcoran et al., 2020 ). In educational contexts, students often engage in subconscious social comparisons, evaluating aspects such as academic performance, physical appearance, and athletic skills (Fleur et al., 2023 ). These comparisons can offer insights into peer perceptions, thereby motivating learners (Chen & Chen, 2023 ) and encouraging them to match their peers’ achievements, leading to improved cognitive engagement and learning outcomes (Wambsganss et al., 2022 ). While social comparison can be beneficial in psychology and health (Appel et al., 2015 ; Han et al., 2020 ; Verduyn et al., 2020 ), it can also induce anxiety, potentially hindering learning (Bai et al., 2021 ).

Despite the recognized benefits and potential pitfalls of social comparison in educational contexts, significant research gaps remain. First, previous studies have primarily focused on the impact of social comparison on individuals (Bai et al., 2021 ; Delava et al., 2017 ; Kollöffel & Jong, 2016 ), with no known research on the design and implementation of social comparison feedback in online collaborative environments. Second, prior research has focused mainly on learning performance, neglecting the effects of social comparison on group dynamics, such as group-regulated learning and social network structures. This oversight has limited our comprehensive understanding of social comparison feedback. Finally, to our knowledge, only two studies have explored the differences between social comparison and self-reference (Delava et al., 2017 ; Kollöffel & Jong, 2016 ), and both focused on individuals without investigating these differences in online collaborative environments.

Therefore, this study aims to address these research gaps by integrating social comparison feedback into online asynchronous collaboration. We used a randomized controlled trial design involving in-service teachers to investigate the effects of this integration, examining how social comparison feedback influences group-regulated focus, interactive group behavior, and social network structure among teacher-learners participating in online collaborative learning environments.

Literature review

Asynchronous collaborative learning in online teacher training

Online teacher training has become a crucial means of professional development (Ma et al., 2022a , 2022b ), offering benefits such as flexibility in schedule and location, access to diverse learning resources, and the ability of learners to progress at their own pace. Additionally, a significant advantage of online platforms is their capability to integrate various instructional supports tailored to specific courses and learner needs, which facilitates effective interactions (Gao et al., 2024 ). A key feature of this mode of training is online asynchronous collaboration (Ma et al., 2023 ), where teacher-learners work together to understand course materials and co-construct new knowledge (Liu et al., 2021 ). This collaboration typically takes place through interactive boards, forums, and assignment review areas, accommodating participants who cannot meet in person for various reasons.

Asynchronous collaborative learning provides more time for reflection and deliberation compared to synchronous interactions (Lin & Sun, 2024 ). This extended time fosters deeper thinking and more thoughtful communication in learners. Previous research indicates that asynchronous collaborative learning enhances problem-solving abilities (Hendarwati et al., 2021 ), boosts critical thinking (Oh et al., 2018 ), and promotes group knowledge construction (Yang et al., 2020 ).

However, online asynchronous collaboration presents several challenges. Merely participating in online collaborative learning does not guarantee effective learning or successful completion of collaborative tasks (Chejara et al., 2024 ). First, the lack of continuity often extends the timeline (Zhou et al., 2015 ), leading to disjointed discussions and potential deviations from key training content (Guan et al., 2006 ). Second, the delays inherent in online asynchronous collaboration can make interactions among teacher-learners less timely, fostering a sense of isolation and reducing motivation to learn (Kaufmann & Vallade, 2020 ).

Therefore, to enhance the effectiveness of online teacher training, designing effective learning support strategies is essential. Feedback, as a vital component of asynchronous learning environments, can offer valuable insights in the absence of real-time interactions, helping learners identify issues that may be difficult to recognize on their own (Cui & Schunn, 2024 ; Shea & Bidjerano, 2010 ).

Social comparison feedback

Social comparison, a concept introduced by Festinger ( 1954 ), involves individuals evaluating their own opinions and abilities by comparing themselves with others. This comparison serves as a mechanism for accurate self-evaluation (Festinger, 1954 ), enabling individuals to gauge their abilities, behaviors, and performance levels by contrasting them with those of their peers in similar situations. Compared to absolutist approaches, social comparison is a more efficient method of information processing, enabling self-assessment with reduced cognitive effort (Mussweiler & Epstude, 2009 ) and addressing the needs for self-assessment, self-improvement, and self-enhancement (Dijkstra et al., 2008 ), even when objective standards are present.

Social comparison can be categorized into three types based on various theoretical models and perspectives: upward comparison, parallel comparison, and downward comparison. Upward social comparison occurs when individuals compare themselves with those at a higher level, which can help them identify their shortcomings (Park et al., 2021 ). Parallel social comparison involves comparing oneself with others of similar abilities or opinions (Festinger, 1954 ), while downward social comparison involves comparing oneself with those in less favorable situations, aiming to maintain a positive self-image and enhance satisfaction, self-esteem, and self-evaluation (Kong et al., 2021 ).

Social comparison feedback is believed to aid learners in learning from their peers and identifying learning gaps. Neugebauer et al. ( 2016 ) noted that learners prone to social comparison are better at extracting useful information from high-performing peers. Previous studies also suggest that this feedback can improve online learning performance (Joksimovic et al., 2015 ) and self-efficacy (Flener-Lovit et al., 2020 ). By providing social comparison feedback, we aimed to offer guidance and encourage active engagement in online asynchronous collaborative learning. However, concerns have also been raised about anxiety induced by social comparison and its impact on effective learning (Ray et al., 2017 ).

Therefore, in this study, we examined the influence of social comparison feedback on teacher-learners in online asynchronous collaboration.

Factors influencing online collaborative learning

Regulated focus, which is essential for the success of online learning, is positively associated with collaborative learning performance (Carter et al., 2020 ; Zheng et al., 2019 ). Research by Rogat and Adams-Wiggins ( 2015 ) on seventh-grade students working in groups on science tasks revealed that effective regulation significantly enhanced team performance. Given the dynamic nature of collaborative learning, traditional tools often fall short in understanding and facilitating this process. Epistemic Network Analysis (ENA), which conceptualizes learning as the development of a cognitive framework that integrates knowledge and competencies, proves effective in tracking the dynamics of these regulatory processes (Lu et al., 2023 ; Shaffer et al., 2016 ). The growing popularity of ENA in research underscores its value for investigating regulated focus in group learning.

Group behavioral sequences are also vital in collaborative learning. Studies by Yang ( 2023 ) and Tlili et al. ( 2023 ) have examined the sequential progression of group behaviors, and techniques such as lag sequential analysis (LSA) can be employed to identify significant relationships between behaviors and uncover patterns (Berk et al., 1997 ).

Social interaction is crucial for online collaborative learning. Social network analysis, as utilized by researchers like Xie et al. ( 2018 ), plays a key role in understanding changes within learning community structures. This method employs nodes to represent entities and edges to denote relationships, thereby revealing participants’ roles in collaborative activities. Calvani et al. ( 2010 ) identified three essential metrics for assessing effective network interactions: participation, cohesion, and synthesis. Additionally, Zheng et al. ( 2021 ) recommended indicators like per capita postings and entry degree centrality for analyzing small social networks consisting of three to five individuals.

Aims and research questions

Grounded in the four dimensions of the learning engagement framework proposed by Fredricks et al. ( 2004 ), this study focused on the application of social comparison feedback within online asynchronous collaborative learning groups for teacher-learners. The primary objective was to evaluate the overall impact of this feedback on collaborative learning processes from multiple perspectives. Thus, the study aimed to answer the following research questions:

How does social comparison feedback influence the regulated focus within learning groups?

How does social comparison feedback influence collaborative interactive behavior within these learning groups?

How does social comparison feedback influence the social network structure within these groups?

Research context

The Project-Based Learning (PBL) Design in Action course examined in this study is a free public-service program aimed at enhancing the theoretical knowledge and practical skills of in-service teachers in PBL design. By engaging teachers in PBL design projects, the course helps them develop the ability to design effective PBL curricula. The course is structured around four key topics: PBL topic selection, setting learning objectives and plans, supporting the learning process, and assessing learning outcomes. Hosted on the EPBL platform, this online program is available to primary and secondary school teachers throughout China.

Before the course began, an online live session was organized for all participants. This session provided detailed demonstrations of platform navigation and features, along with an overview of the course structure. During this session, participants were randomly assigned to small groups and given access to a dedicated discussion area on the EPBL platform. This discussion area was also intended for collaborative activities once the course officially started, promoting group formation and mutual understanding.

Before the course officially began, an introductory phase was initiated where participants interacted briefly within their groups in the online discussion area. They introduced themselves and voluntarily shared information such as their names, regions, schools, subjects taught, and grade levels. This preliminary interaction was crucial for several reasons: it helped participants familiarize themselves with the discussion area, established initial connections among group members, and ensured that everyone was comfortable using the platform before course content was delivered. During this introductory phase, two teaching assistants were available online to assist with any technical issues.

Once the initial interactions were completed, the course proceeded through the four key topics. Each topic included multiple learning videos, materials, and collaborative activities. Participants engaged in these online collaborative activities within the discussion areas of their small groups and submitted assignments related to each topic. All activities were conducted online. To ensure effective progress in collaborative learning, insights from scholars such as Kawai ( 2006 ) and Biesenbach-Lucas ( 2004 ) on asynchronous collaborative learning were incorporated. The approach was tailored to the unique characteristics of teacher-learners and the course, with appropriate support mechanisms implemented to promote interdependence among group members. The course facilitated the gradual completion of PBL design projects through guided group collaboration on various topics. Upon completing the tasks for the four key topics, each group produced a comprehensive PBL design project.

Figure  1 illustrates the collaboration interface among different groups within specific topic discussion areas. Participants had the option to use filter buttons to view comments from other participants before posting their own comments. The posting box included a range of editing tools—such as text, graphics, and tables—that allowed participants to refine their comments. After engaging in online collaboration, groups were required to submit assignments related to each topic. These assignments were reviewed by teaching assistants, who then provided feedback.

figure 1

Interface for collaborative discussions within groups

Participants

At the beginning of the experiment, a randomized controlled trial (RCT) design was employed to ensure that participants were randomly assigned to either the experimental group or the control group. This method minimized selection biases by randomly assigning participants to treatment and control conditions, thus providing robust evidence for research methodologies (Gegenfurtner & Ebner, 2019 ). The use of RCTs was well-established in educational research, as demonstrated in studies such as those by Merk et al. ( 2020 ) and Schenke et al. ( 2020 ).

Initially, 109 teachers registered for the course, with 53 in the experimental group and 56 in the control group. As our study focused on asynchronous collaborative learning, we included only the 95 participants who completed at least one discussion thread: 49 in the experimental group and 46 in the control group. Because this activity criterion defined inclusion, every retained participant was active, and retention rates did not differ between the experimental and control groups.

Individual variables such as educational level and gender could influence online peer feedback (Noroozi et al., 2024 ). To ensure comparability between the experimental and control groups, we conducted descriptive statistics on participants’ gender, teaching experience, and grade level taught. Table 1 presents these characteristics, indicating that the distributions were roughly equivalent between the two groups. Specifically, the gender distribution was similar in both groups, with approximately 35% male and 65% female participants. In terms of teaching experience, the majority had 1–5 years of experience (about 46%), followed by those with over 16 years (about 19%), and pre-service teachers comprised about 16%. Regarding the grade level taught, approximately half of the participants were primary school teachers, while the remaining participants taught at middle school or high school levels, or were pre-service teachers. Most participants had prior experience with online learning and were enthusiastic about engaging in collaborative online activities and interacting with both experts and peers.

Construction of social comparison feedback

Mussweiler ( 2003 ) identified three stages in the process of social comparison: standard selection, comparison with the target, and evaluation. In this study, social comparison was defined as the process in which participants are presented with comparative information about their peers within specific standards, which is valuable for their self-evaluation.

During online collaboration, each group engaged in discussions and submitted assignments on various topics. These assignments were evaluated and ranked by teaching assistants in descending order. Groups were categorized into three levels: the top one-third were labeled “excellent,” the middle one-third “qualified,” and the bottom one-third “developing”. Participants could choose a comparison category for their group, enabling various types of social comparisons: upward, downward, or parallel. For example, comparing a “developing” group with an “excellent” group constituted an upward comparison, comparing an “excellent” group with another “excellent” group constituted a parallel comparison, and comparing a “qualified” group with a “developing” group constituted a downward comparison.
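The ranking-into-thirds step described above can be sketched in a few lines of Python. This is an illustrative reconstruction rather than code from the study: the group names and scores are invented, and the use of ceiling division to handle group counts not divisible by three is our assumption.

```python
# Hypothetical sketch: label groups "excellent" / "qualified" / "developing"
# by ranking assignment scores into thirds, as described in the text.
# Group names and scores are illustrative, not data from the study.

def label_groups(scores):
    """scores: dict mapping group -> assignment score. Returns group -> level."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    n = len(ranked)
    third = -(-n // 3)  # ceiling division so every group receives a label
    labels = {}
    for i, group in enumerate(ranked):
        if i < third:
            labels[group] = "excellent"
        elif i < 2 * third:
            labels[group] = "qualified"
        else:
            labels[group] = "developing"
    return labels

scores = {"G1": 92, "G2": 78, "G3": 85, "G4": 60, "G5": 70, "G6": 88}
print(label_groups(scores))
```

With the sample scores above, the two highest-scoring groups are labeled "excellent", the middle two "qualified", and the bottom two "developing", which then determines whether a given comparison is upward, parallel, or downward.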

Social comparison feedback was provided to participants through a combination of visual representations and textual descriptions. The tone of the feedback became increasingly positive as the ranking of the participant’s group rose. Participants received specific comparisons between their own group and other selected groups across four dimensions: behavior, cognition, interaction, and emotion. This process is illustrated in Fig.  2 .

Behavioral Dimension: Data in this dimension were derived from behavioral indicators observed on the learning platform, such as study duration, the number of quizzes taken, and the quality of quizzes submitted by each group. Participants received a comparison of these behavioral indicators between their own group and the selected group.

Cognitive Dimension: Data for this dimension were sourced from interactional texts in the discussion areas of each group. High-frequency words and topics from the selected group were identified using Term Frequency-Inverse Document Frequency (TF-IDF) analysis and Latent Dirichlet Allocation (LDA) topic modeling, with the optimal number of topics determined by topic perplexity (Blei, 2000 ). Participants were then shown the high-frequency words and topics from the selected group.

Interactive Dimension: Data in this dimension were gathered from interactional information in each group’s discussion area. Four types of indicators of small social network interactions were analyzed: interaction density, interaction centrality, interaction cohesion, and interaction balance (Zheng et al., 2021 ). Participants were presented with social network diagrams and comparisons of these indicators between their own group and the selected group.

Emotional Dimension: Data for this dimension came from self-reports by group members, using Artino’s adapted academic emotion questionnaire (Artino and Jones, 2012 ) to assess the emotional states during collaborative learning. Participants received comparisons of emotional states between their own group and the selected group.
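As a rough illustration of the TF-IDF step in the cognitive dimension (the LDA topic modeling would build on the same token counts), the following self-contained sketch scores terms per group. The posts and tokenization are invented placeholders, and a real analysis would use a proper NLP toolkit with Chinese-language segmentation.

```python
import math
from collections import Counter

# Minimal sketch of the TF-IDF step used for the cognitive dimension:
# surface high-frequency, discussion-specific words from each group's posts.
# The token lists below are illustrative placeholders, not study data.

def tfidf_top_terms(documents, top_n=3):
    """documents: list of token lists (one per group). Returns top terms per doc."""
    n_docs = len(documents)
    df = Counter()  # document frequency: in how many groups each term appears
    for doc in documents:
        df.update(set(doc))
    results = []
    for doc in documents:
        tf = Counter(doc)
        scores = {t: (tf[t] / len(doc)) * math.log(n_docs / df[t]) for t in tf}
        results.append(sorted(scores, key=scores.get, reverse=True)[:top_n])
    return results

groups = [
    "project goal design goal rubric".split(),
    "assessment rubric rubric feedback".split(),
    "project timeline milestone feedback".split(),
]
print(tfidf_top_terms(groups))
```

Terms that recur within one group's discussion but are rare across groups score highest, which is what lets the feedback show a group the distinctive vocabulary of a comparison target.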

figure 2

Social comparison feedback process

After the initial comparison with the selected group, each participant had the option to compare their performance with that of other groups.

Study design

The course involved each group designing a feasible project-based learning (PBL) plan. The design process was segmented into a series of tasks related to each topic. Groups were required to collaboratively discuss each topic and submit assignments related to the PBL plan for that topic. At the end of each topic, both the experimental and control groups received feedback. The experimental group received social comparison feedback, while the control group received self-referential feedback. The fundamental differences between these types of feedback are outlined in Table  2 .

As noted previously, the comparison between the experimental and control groups focused on comparison options, overview guidelines, the learning information of the subject group, and the learning information of the other groups. The specific differences are outlined in Table  3 :

Comparison Options: The experimental group received feedback that included comparisons with other groups, allowing participants to dynamically choose among excellent, qualified, or developing groups. In contrast, the control group received feedback solely on their own learning process relative to the completed topic, without any comparative information.

Overview Guidelines: The experimental group was provided with social comparison guidelines that varied based on the group type chosen for comparison (excellent, qualified, or developing). For instance, if a participant’s group was ranked as developing and they chose an excellent group for comparison, they received upward social comparison feedback, such as “Room for improvement!” along with related guidelines.

Learning Information: Regarding the behavioral, cognitive, interaction, and emotional dimensions, the experimental group could compare its performance with any type of target group (excellent, qualified, or developing) and received comparative guidelines and visuals. Conversely, the control group could only access information about its own performance in these dimensions.

This study employed a randomized controlled trial to explore the impact of social comparison feedback on online collaboration. The experimental group (n = 49, including 11 small groups) received social comparison feedback, while the control group (n = 46, including 11 small groups) received self-referential feedback. The experiment lasted for 16 days, and the study design is illustrated in Fig.  3 .

figure 3

Stage 1 involved preparation, including the development of course content and the recruitment of participants.

Stage 2 encompassed the randomization process, during which 95 learners were randomly assigned to 22 groups, each consisting of 4–5 members. Of these groups, 11 were designated as the experimental group and 11 as the control group.

Stage 3 saw participants engaging in online asynchronous collaboration. The course comprised four topics, each introduced sequentially according to the course schedule, with each topic lasting 4 days. During the open period for each topic, learners were required to study the corresponding course materials and participate in online asynchronous collaborative learning. They could choose to learn and discuss at any time within the open period for each topic. Before the deadline for each topic, groups were required to submit their assignments. Following submission, they received feedback related to the topic, with the experimental group receiving social comparison feedback and the control group receiving self-referential feedback.

Coding scheme

Learning regulation focus coding scheme

In this study, the comment data were analyzed using the online collaborative learning regulation focus coding scheme developed by Zhang et al. ( 2021 ), which has demonstrated strong validity and reliability. This coding scheme classified the groups’ regulation focus into three main dimensions: task, emotion, and organization, each comprising several sub-dimensions.

Comments in the task dimension were further categorized into task understanding (Task), content monitoring (ConMo), and process monitoring (ProMo). Task referred to the extent of understanding of the learning tasks, ConMo involved tracking the accuracy and relevance of the discussed content, and ProMo pertained to overseeing the learning methods and strategies used.

Comments in the emotion dimension were classified into positive emotion (Pos), negative emotion (Neg), and joking (Joke). Pos denoted expressions of approval or appreciation for the content posted by others, while Neg referred to expressions of disapproval or dissatisfaction. Joke indicated emotional content unrelated to the learning material, such as humor or unrelated banter.

Comments in the organization dimension included comments related to organizing (Org), which indicated activities focused on structuring or arranging the learning process within the group.

Interaction behavior coding scheme

Based on Gunawardena et al.’s ( 1997 ) interaction analysis model and subsequent research by scholars such as Hou and Wu ( 2011 ), Wang et al. ( 2020 ) developed a verb-driven interaction behavior coding scheme. This scheme prioritized learners’ communication and interaction rather than focusing solely on constructing advanced social knowledge. In this study, the scheme was adapted to reflect the nuances of online asynchronous collaboration, as shown in Table  4 . The adapted coding scheme was used to analyze and code the groups’ discussion behaviors.

Knowledge construction level coding scheme

In this study, we utilized Gunawardena et al.’s ( 1997 ) interaction analysis model and its modified versions, as this model is widely used for content analysis of online discussions. Gunawardena’s model classified a group’s knowledge construction into five stages, reflecting increasing depth of knowledge construction and interaction quality. The five stages were: sharing/comparing of information, discovery of dissonance and inconsistency, negotiation of meaning/co-construction of knowledge, testing and modification of the proposed synthesis, and agreement/application of newly constructed meaning. This model was used to analyze and quantify the interaction data to assess the degree of knowledge construction achieved by the groups.

Data analysis

To investigate the influence of social comparison feedback on asynchronous collaboration, we analyzed three dimensions: regulation focus, interaction behavior, and social network structure. These dimensions provided a comprehensive understanding of how social comparison feedback influenced the learning process, group interaction dynamics, and social relationships among learners engaged in online collaboration.

Regarding Research Question 1, which concerned the regulation of the learning process within groups, we invited two experts in project-based learning to code all discussion posts using the group regulation focus coding scheme described in Sect. “Learning regulation focus coding scheme”. Before coding, the experts received relevant training and independently coded 15% of the selected posts. Their coding demonstrated a high level of reliability, with a coefficient of 0.85 (Fleiss, 2003 ). The experts discussed any discrepancies to ensure consistency and then independently coded the participants’ discussion data.

We subsequently conducted Epistemic Network Analysis (ENA) on the experimental and control groups based on the coded data. ENA used coded qualitative data from interactions, such as discussions, to construct networks. Specifically, ENA identified, quantified, and visualized the connection structures between design nodes by analyzing the co-occurrence of cognitive nodes (Shaffer et al., 2016 ). In the visualized graph, each node represented a predefined cognitive element, and the edges indicated the co-occurrence between these elements. The thickness of the edges reflected the relative strength of the connection between two nodes. The network was mapped onto a two-dimensional space, with the X and Y axes helping to distinguish the connection patterns between nodes. Nodes that were close together indicated frequent co-occurrence in similar interaction contexts.

Additionally, ENA created subtraction networks to identify the most significant differences between the two networks. By comparing the subtraction networks of the experimental and control groups within the epistemic network space, we determined the influence of social comparison feedback on group-regulation focus.
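The co-occurrence counting at the core of ENA can be illustrated with a minimal sketch. This is a simplification under assumed inputs: the stanza segments below are invented, only the code abbreviations come from the coding scheme above, and a real ENA pipeline additionally normalizes these counts and projects them into the two-dimensional space via dimensional reduction.

```python
from collections import Counter
from itertools import combinations

# Minimal sketch of ENA's co-occurrence counting: for each discussion
# segment ("stanza"), count which pairs of regulation codes appear
# together. Stanzas are invented; real ENA normalizes and projects these.

def cooccurrence_network(stanzas):
    """stanzas: list of sets of codes observed in one discussion segment."""
    edges = Counter()
    for codes in stanzas:
        for a, b in combinations(sorted(codes), 2):
            edges[(a, b)] += 1
    return edges

stanzas = [
    {"Task", "ConMo"},
    {"Task", "ConMo", "Pos"},
    {"ProMo", "Org"},
    {"Task", "Pos"},
]
print(cooccurrence_network(stanzas))
```

Edge weights (here, raw pair counts) correspond to the line thicknesses in the visualized network, and subtracting two groups' edge-weight tables yields the kind of subtraction network described above.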

To analyze group interaction behavior, we used the interaction behavior coding scheme adapted from Wang et al. (2020) (see Sect. “Interaction behavior coding scheme”) and invited two experts to code all interaction behavior data of the learners. Before coding, they underwent relevant training and independently coded 15% of the selected posts. Their coding demonstrated a high level of reliability, with a coefficient of 0.94 (Fleiss, 2003). After reaching a consensus through discussion, the experts independently coded the participants’ interaction behavior data. We then conducted lag sequential analysis (LSA) using GSEQ 5.0. LSA calculated the probabilities of transitions between different behaviors to identify patterns and dependencies, generating transition diagrams that displayed the likelihood of moving from one behavior to another. If the Z-score for a particular behavior sequence exceeded 1.96, the sequence was statistically significant (p < 0.05). These significant behavior sequences revealed the behavior patterns of the experimental and control groups during online asynchronous collaborative learning. Through these analyses, we gained a better understanding of the impact of social comparison feedback on group interaction behavior.
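The Z-scores that LSA produces are adjusted residuals on the lag-1 transition table. The sketch below reimplements that computation from scratch (using the Allison–Liker adjusted-residual formula); it is an illustration of the statistic, not the GSEQ software the study used, and the behavior labels in the demo sequence are invented.

```python
import math
from collections import Counter

def lag_sequential_z(behaviors):
    """Adjusted residuals (Allison-Liker Z-scores) for lag-1 transitions.
    |Z| > 1.96 flags a transition that occurs more often than expected by
    chance (p < .05), which is the criterion used in the article."""
    pairs = Counter(zip(behaviors, behaviors[1:]))
    n = sum(pairs.values())
    row, col = Counter(), Counter()
    for (a, b), c in pairs.items():
        row[a] += c   # how often each behavior appears as the antecedent
        col[b] += c   # how often each behavior appears as the consequent
    z = {}
    for (a, b), obs in pairs.items():
        exp = row[a] * col[b] / n
        denom = math.sqrt(exp * (1 - row[a] / n) * (1 - col[b] / n))
        z[(a, b)] = (obs - exp) / denom if denom else 0.0
    return z

# Demo sequence of coded behaviors from one (hypothetical) discussion thread.
seq = ["ask", "respond", "ask", "respond", "offer", "negotiate", "ask", "respond"]
z_scores = lag_sequential_z(seq)
```

In the demo, the repeated `ask → respond` transition exceeds the 1.96 threshold while rarer transitions do not, mirroring how the significant sequences in Fig. 5 are identified.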

Regarding social network structure, group members interacted during online asynchronous collaboration by posting and replying to messages. By considering all members of the group as network nodes, with posts representing “out-degree” connections to other group members and replies representing “in-degree” connections for specific participants, directed social networks were formed within each group. We analyzed the social networks of both the experimental and control groups using the dimensions of interaction intensity, interaction balance, and interaction quality to measure the effects of social comparison feedback on group social network relationships.
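The construction of these directed networks can be sketched as below. This is a simplified model — each reply is treated as one directed tie from author to recipient, and density is the share of possible directed ties that occur — and the member names and reply list are invented for illustration.

```python
def build_directed_network(replies, members):
    """Build a directed interaction network from reply records.
    replies: list of (author, recipient) pairs; each reply adds to the
    author's out-degree and the recipient's in-degree.
    Density = unique directed ties / n(n-1), the standard directed-graph
    density used as one interaction-intensity indicator below."""
    edges = set()
    out_deg = {m: 0 for m in members}
    in_deg = {m: 0 for m in members}
    for author, recipient in replies:
        edges.add((author, recipient))
        out_deg[author] += 1
        in_deg[recipient] += 1
    n = len(members)
    density = len(edges) / (n * (n - 1)) if n > 1 else 0.0
    return out_deg, in_deg, density

# Hypothetical subgroup of three learners.
members = ["A", "B", "C"]
replies = [("A", "B"), ("A", "C"), ("B", "A"), ("A", "B")]
out_deg, in_deg, density = build_directed_network(replies, members)
```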

Group-regulated focus

To further explore the differences in group-regulated focus between the experimental and control groups, we plotted the subtracted epistemic network shown in Fig.  4 . Each student in the experimental group was represented by a blue dot and each student in the control group by a red dot. The blue and red squares represented the average centroids of the experimental and control groups, respectively, and the dashed boxes around the squares indicated the 95% confidence intervals. Nodes in the ENA network represented each code (e.g., Task, ConMo), and the connections between nodes represented associations; the thickness of the line between two nodes indicated the strength of the connection.

figure 4

The subtracted epistemic network depicting group-regulated focus in the experimental group (blue) and the control group (red)

The epistemic network generated from the coding data exhibited explanatory strengths of 18.2% along the x-axis and 24.6% along the y-axis. Given the non-normal distribution of the data, we conducted a Mann–Whitney U test on the projection points. We observed a significant difference between the two groups along the x-axis ( U  = 720.00, p  < 0.001, r  = 0.44), but no significant difference along the y-axis ( U  = 1311.00, p  = 0.90).

For the task dimension (codes: Task = task understanding, ConMo = content monitoring, ProMo = process monitoring), the task-related codes were concentrated toward the experimental group’s side of the x-axis, indicating that task-related aspects were more emphasized in this group. In the experimental group, the connections between Task, ConMo, and ProMo illustrated the collaborative problem-solving process: learners engaged in discussions about content monitoring based on their task understanding, iteratively refining and developing their cognitive goals. Additionally, the stronger ConMo–ProMo connection in the experimental group (0.44 vs. 0.38 in the control group) suggested that its participants engaged in more process monitoring during content monitoring.

Regarding the emotion dimension (codes: Neg, Pos, and Joke), where Neg represented negative emotions, Pos represented positive emotions, and Joke represented joking, two main observations emerged. First, in terms of node positions, negative emotions (Neg) and joking (Joke) were more prominent in the experimental group on the x-axis, while positive emotions (Pos) were more prominent in the control group. Second, in terms of connection strength, the experimental group’s negative emotions (Neg) and joking (Joke) had weaker connections with other nodes, indicated by faint blue connection lines. In contrast, the control group’s positive emotions (Pos) had stronger connections with content monitoring (ConMo): Pos-Task (experimental group: 0.05; control group: 0.12), Pos-ConMo (experimental group: 0.35; control group: 0.50), Pos-ProMo (experimental group: 0.07; control group: 0.18). This indicated that the control group frequently exhibited positive emotions (Pos) during content discussions (ConMo).

Regarding the organization dimension (code: Org), the results from the Mann–Whitney U test indicated differences between the experimental and control groups along the x-axis, with the organization code being closer to the experimental group on this axis. Additionally, the organization code in the experimental group showed a closer connection to task understanding (Task): Org-Task (experimental group: 0.12; control group: 0.07). This suggested that the experimental group likely organized their content more comprehensively than the control group, with this more structured approach to content organization contributing to a better understanding of the task.

Group interaction behaviors

The behavior sequence transition diagrams for the experimental and control groups, based on the residual tables of behavior sequences, are shown in Fig.  5 . In Fig.  5 , nodes represented different types of interaction behaviors, while the connecting lines between the nodes indicated significant behavior sequences. The arrows illustrated the order of transition between two behaviors, and the numbers above the arrows ( Z -scores) indicated the significance level of the behavior sequences. To visually emphasize these differences, the thickness of the arrows was proportional to the significance level, with higher values represented by thicker arrows.

figure 5

Behavioral sequence transition diagram for the experimental and control groups

Three notable differences between the experimental and control groups emerged in relation to the behavioral sequence transitions.

(a) In the yellow section (Line 1): The experimental group exhibited a distinct offer → negotiate sequence, indicating that participants in this group engaged in more negotiation behavior after receiving information. This behavior reflected deeper consideration of the collaborative content, including negotiations over differing viewpoints or alternatives. Conversely, the control group displayed a pronounced offer → support sequence, where support denoted agreement with others’ opinions. This suggested that participants in the control group were more inclined to endorse their peers’ opinions.

Additionally, the ask → respond sequence was observed in both groups but was more pronounced in the experimental group (Z1 = 12.52 > Z2 = 5.33). This indicated that participants in the experimental group were more likely to receive responses to their questions. However, it is important to acknowledge that lag sequential analysis does not directly test for significant differences in link strength across conditions, so this inference about stronger connections in the experimental group should be interpreted with caution.

(b) In the blue section (Line 2): The experimental group demonstrated greater engagement in monitoring behaviors within their summaries. Monitoring actions signified control over the group collaboration process and reflection on collaborative content. Two significant behavioral sequences, monitor → conclude and conclude → monitor, were observed in the experimental group. In contrast, monitoring in the control group appeared as isolated actions without connections to other behaviors.

(c) In the green section (Line 3): Support reflected approval of other group members’ opinions. The experimental group displayed support → monitor and support → add sequences, as well as an inner add → add loop, indicating that after a group member expressed support, others would monitor or supplement that contribution, and the adding behavior might be repeated. In contrast, the control group lacked significant sequential behaviors following support. Furthermore, while the experimental group displayed an internal add → add cycle, the control group demonstrated an add → provide sequence, suggesting that after adding, control-group participants tended to introduce new ideas rather than continue supplementing.

Social network structure

To investigate the effects of social comparison feedback on social network structure, we first conducted a comprehensive observation of postings in both the experimental and control groups. Following this, we analyzed interaction intensity, interaction balance, and interaction quality. Table 5 provides the descriptive statistics for postings in both groups. It is noteworthy that the experimental group posted more comments compared to the control group. Additionally, the number of posts in both groups declined over time.

Interaction intensity

Interaction intensity reflects the frequency of interaction among group members and indicates the level of activity and engagement within the group. To examine the impact of social comparison feedback on interaction intensity, we used two indicators: the average number of posts per person and network density.

We conducted a one-way ANCOVA to compare the average number of posts per person between the experimental and control groups, treating group as the independent variable, the number of posts in the initial theme discussion area as the covariate, and the number of posts in the final theme discussion area as the dependent variable. The parallelism test revealed no significant interaction between the independent variable and the covariate ( F  = 0.010, p  = 0.921 > 0.05), and the residual normality test confirmed that the residuals followed a normal distribution with a mean of 0 ( p  = 0.613 > 0.05), supporting the use of ANCOVA. The results showed a significant difference between the experimental and control groups in the number of posts in the final theme ( F  = 6.04, p  = 0.024 < 0.05). That is, although both groups experienced a reduction in the number of posts over time, social comparison feedback appeared to attenuate the rate of decline, having a significantly positive effect on the number of posts.
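The group-effect F test at the heart of this ANCOVA can be reproduced from first principles by comparing the residual sum of squares of a reduced model (posts_final ~ posts_initial) against a full model that adds the group indicator. This is an illustrative from-scratch sketch — the authors presumably used a statistics package, the assumption checks (parallelism, residual normality) are omitted here, and the demo data are invented.

```python
def _solve(a, b):
    # Gauss-Jordan elimination with partial pivoting for small linear systems.
    n = len(a)
    m = [list(row) + [b[i]] for i, row in enumerate(a)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        piv = m[c][c]
        m[c] = [v / piv for v in m[c]]
        for r in range(n):
            if r != c:
                f = m[r][c]
                m[r] = [v - f * w for v, w in zip(m[r], m[c])]
    return [m[i][n] for i in range(n)]

def rss(X, y):
    """Residual sum of squares of an OLS fit via the normal equations."""
    k, n = len(X[0]), len(X)
    xtx = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)] for a in range(k)]
    xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    beta = _solve(xtx, xty)
    return sum((y[i] - sum(X[i][a] * beta[a] for a in range(k))) ** 2 for i in range(n))

def ancova_f(pre, post, group):
    """F-statistic for the group effect after adjusting for the covariate:
    compare post ~ pre (reduced) vs post ~ pre + group (full)."""
    n = len(post)
    rss_r = rss([[1.0, pre[i]] for i in range(n)], post)
    rss_f = rss([[1.0, pre[i], float(group[i])] for i in range(n)], post)
    return (rss_r - rss_f) / (rss_f / (n - 3))  # df1 = 1, df2 = n - 3

# Invented demo: a clear group effect on final-theme posts after adjustment.
pre = [1, 2, 3, 4, 1, 2, 3, 4]
group = [0, 0, 0, 0, 1, 1, 1, 1]
post = [2, 3, 4, 5.1, 5, 6.1, 7, 8]
f_stat = ancova_f(pre, post, group)
```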

Given the small sample size, with both the experimental and control groups consisting of only 11 subgroups, our primary focus was on the descriptive analysis of the density distribution. Table 6 presents the network density distribution for both groups. Overall, the experimental group exhibited a higher mean network density than the control group ( M 1  = 0.67 >  M 2  = 0.57), along with a higher maximum value and a lower minimum value. The numerical distribution was also more clustered, indicating greater network density in the experimental group. Despite these observed differences, the Mann–Whitney U test revealed no significant difference in the density distribution between the experimental and control groups (Experimental group: Mdn  = 0.670, SD  = 0.135; Control group: Mdn  = 0.580, SD  = 0.200; U  = 40.500, Z  = 1.320, p  = 0.187).

Interaction balance

In this study, we analyzed the interaction balance within each group using two metrics: out-degree balance and in-degree balance. Out-degree balance was evaluated via participation homogeneity (Zheng et al., 2021 ), which provided insights into the engagement levels of group members. A higher level of participation homogeneity indicated an uneven distribution of contributions among members. In-degree balance was assessed using in-degree centrality (Zheng et al., 2021 ), a metric that measures the popularity of individuals within the network. A higher level of in-degree centrality suggested that the network was centered around specific individuals.
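The two balance indicators can be illustrated with a simple sketch. Note that Zheng et al. (2021) are not quoted here, so the exact formulas are an assumption: this sketch operationalizes both metrics as the population standard deviation of per-member counts, which matches the stated interpretation that higher values indicate a less even (for posts sent) or more centered (for replies received) distribution.

```python
import statistics

def participation_homogeneity(posts_sent):
    """Spread of members' out-degree (posts sent). Higher values indicate a
    less even distribution of contributions. (Assumed operationalization:
    population standard deviation of per-member post counts.)"""
    return statistics.pstdev(posts_sent)

def in_degree_centrality(posts_received):
    """Spread of members' in-degree (replies received). Higher values suggest
    the network is centered on a few popular members. (Same assumption.)"""
    return statistics.pstdev(posts_received)

even = participation_homogeneity([6, 6, 6, 6])     # perfectly balanced group
skewed = participation_homogeneity([18, 2, 2, 2])  # one member dominates
```

Under this operationalization, a group where one member dominates scores much higher than a perfectly balanced one with the same total posts, which is the pattern the box plots in Figs. 6 and 7 visualize.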

Box plots were used to visualize the trends in participation homogeneity (reflecting out-degree balance) and in-degree centrality (reflecting in-degree balance) for both the experimental and control groups. To explore whether there were differences in participation homogeneity and in-degree centrality between the experimental and control groups, we conducted statistical tests. Given the small sample size of subgroups in both groups, we used the Mann–Whitney U test (Şimşek, 2023 ). The results showed no significant differences in participation homogeneity between the experimental and control groups (Experimental group: Mdn  = 7.106, SD  = 2.842; Control group: Mdn  = 7.583, SD  = 3.078; U  = 50.000, Z  = 0.690, p  = 0.490). Similarly, there were no significant differences in in-degree centrality (Experimental group: Mdn  = 3.167, SD  = 4.084; Control group: Mdn  = 3.333, SD  = 3.596; U  = 54.500, Z  = 0.394, p  = 0.693).
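For readers unfamiliar with the rank-based U statistic reported throughout, a minimal implementation is sketched below. It computes U from tie-corrected average ranks; the normal-approximation Z and p-value reported in the article are omitted for brevity, and this is an illustration rather than the software the authors used.

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for sample x (ties receive average ranks).
    U = R1 - n1(n1+1)/2, i.e., the number of (x, y) pairs where x outranks y."""
    combined = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        # Extend j over a run of tied values.
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank for the tied run
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg
        i = j + 1
    r1 = sum(ranks[:len(x)])  # rank sum of the first sample
    return r1 - len(x) * (len(x) + 1) / 2
```

For example, two completely separated samples give the extreme values 0 and n1·n2, while overlapping samples fall in between.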

Generally, as the average number of posts per person increased, both participation homogeneity and in-degree centrality tended to rise because balancing the number of posts sent and received became more challenging. Figures  6 and 7 illustrate that experimental group 1, control group 1, and experimental group 7 had exceptionally high average posting rates, leading to unusually high participation homogeneity and in-degree centrality. This indirectly highlighted the difficulty in maintaining balance as the average number of posts per person increased. Despite the experimental group posting more, their participation homogeneity and in-degree centrality were slightly lower than those of the control group, although the differences were not statistically significant. This suggested that the experimental group performed slightly better in balancing out-degree and in-degree compared to the control group.

figure 6

Horizontal box plots depicting the distribution of participation homogeneity in the experimental group (blue, dotted pattern) and the control group (red, diagonal stripe pattern)

figure 7

Horizontal box plots depicting the distribution of in-degree centrality in the experimental group (blue, dotted pattern) and the control group (red, diagonal stripe pattern)

Interaction quality

To analyze the effects of social comparison feedback on interaction quality, we used Gunawardena’s interaction analysis model of knowledge construction (see Sect. “Knowledge construction level coding scheme”) to assess the quality of interaction in the two groups. Inter-rater consistency was very high, with a kappa of 0.89 (Fleiss, 2003). As illustrated in Fig.  8 , most participants were predominantly engaged in the preliminary phase of knowledge construction, focusing mainly on information sharing; this phase accounted for approximately half of all interactions. In contrast, participation in the deeper stages of knowledge construction, which involve applying newly constructed knowledge, was far lower, barely reaching 5%. This indicated that, during online asynchronous collaborative learning, most knowledge construction activities centered on identifying and sharing information, with less emphasis on advancing to higher levels of knowledge construction.

figure 8

Detailed distribution of participants’ knowledge construction levels
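The inter-rater kappa reported above can be computed as follows. This sketch assumes Cohen's kappa for two raters (the article cites Fleiss (2003) but involves two coders); the rating vectors in the test are invented.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two coders:
    kappa = (po - pe) / (1 - pe), where po is observed agreement and
    pe is the agreement expected from each rater's marginal distribution."""
    n = len(rater_a)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[c] * cb[c] for c in ca) / (n * n)
    return (po - pe) / (1 - pe)
```

Perfect agreement yields 1.0, while agreement no better than chance yields 0.0; values around 0.85–0.94, as reported for the coding schemes in this study, indicate very high reliability.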

The Shapiro–Wilk test was conducted to examine the normality of the knowledge construction levels for both the experimental and control groups. The results showed that neither group followed a normal distribution (experimental group: W  = 0.794, p  < 0.001; control group: W  = 0.763, p  < 0.001). Consequently, the Mann–Whitney U test was used to assess whether the knowledge construction levels differed significantly between the experimental and control groups. The results revealed that the knowledge construction level of learners in the experimental group ( Mdn  = 2, SD  = 1.135) was significantly higher than that of the control group ( Mdn  = 1, SD  = 1.066), with U  = 401,943.000, Z  = 2.428, p  = 0.015 < 0.050. This suggested that the experimental group, which received social comparison feedback, demonstrated a higher overall level of knowledge construction compared to the control group, which received self-referential feedback.

Considering the difference in the total number of posts between the experimental and control groups, we compared the percentage of posts at each knowledge construction level. As shown in Fig.  9 , the experimental group exhibited lower proportions of posts in the initial and intermediate stages of knowledge construction compared to the control group but higher proportions in the third, fourth, and fifth stages. This indicated that social comparison feedback promoted a higher level of knowledge construction.

figure 9

Proportion of posts at the various knowledge construction levels in the experimental group (blue, dot pattern) and the control group (red, diagonal stripe pattern)

Group-regulated process

In this study, epistemic networks were developed for both the experimental and control groups. The results indicated that the experimental group, which received social comparison feedback, placed more emphasis on task completion and adjustment compared to the control group. Cialdini and Goldstein ( 2004 ) highlighted that social comparison can effectively integrate learning feedback with goal setting. In online learning environments, social comparison feedback appeared to focus the group’s efforts more effectively, potentially enhancing group-regulation behaviors. Furthermore, while asynchronous collaboration faces challenges such as varying participation times, it offers significant advantages, including increased opportunities for asynchronous discussion and automatic archiving of activities, as noted by Schellens and Valcke ( 2005 ) and Duvall et al. ( 2020 ). This study leveraged these benefits by designing social comparison feedback to enhance asynchronous collaboration. This approach promoted positive regulatory behaviors, demonstrating the value of integrating social comparison within asynchronous collaborative settings.

First, the experimental group frequently exhibited negative emotions, such as questioning, which led to higher-quality outcomes. In contrast, the control group showed more positive emotions, typically manifesting as simple agreement, which did not significantly enhance collaborative knowledge construction. It is speculated that negative emotions triggered further knowledge construction, aligning with previous findings that learners experiencing negative emotions perform better than those experiencing positive emotions (Liaw et al., 2021 ). Simply displaying positive emotions was not sufficient to achieve the same level of positive effects without additional content construction or regulation.

Next, the experimental group engaged in more joking behavior. Qualitative analysis of discussion texts revealed that joking played a significant role in regulating the discussion atmosphere, motivating participants, and expressing friendliness. For example, statements such as “Understanding how important it is to know the direction of one’s efforts, as a teacher, instilling this feeling in students means that education is already halfway to success!” (regulating the discussion atmosphere, motivating others) and “It’s the final stretch! Let’s give it our all together!” (motivating others) exemplified this role. Previous studies have indicated that humor is a crucial form of conversational engagement (Ingram, 2023 ) and plays a vital role in social interaction (Chadwick & Platt, 2018 ). Thus, the increased joking in the experimental group likely facilitated better knowledge construction.

Group interaction behavior

In this study, we analyzed the behavior transition sequences in both the experimental and control groups, leading to the following key findings.

First, the experimental group demonstrated negotiation behavior when receiving information (“offer” behavior) from fellow group members. This behavior indicated deeper engagement with collaborative content, characterized by critical assessment. In contrast, the control group tended to exhibit “support” behavior, accepting information without rigorous evaluation. Second, the experimental group showed a greater tendency to monitor during the conclusion process. They displayed a bidirectional relationship between concluding and monitoring (conclude ↔ monitor), indicating a cautious acceptance of support and summarization behaviors from their peers. Conversely, the control group exhibited sequences such as conclude → conclude and conclude → lead, suggesting a focus on the act of concluding or a tendency to move on to new learning activities. Finally, the experimental group demonstrated monitor and add behaviors when providing support, sometimes involving repeated support. Conversely, the control group typically did not link support with subsequent monitoring or adding behavior but was more inclined to introduce new viewpoints (lead).

In summary, the experimental group showed more negotiation and monitoring behaviors, indicating a deeper level of reflection. This reflective process is crucial if learners are to accumulate and share knowledge and skills over time and to increase their communication and collaboration capabilities (Yang, 2022 ; Zamora, 1985 ). Overall, the results of this study support Bandura’s social cognitive theory principle that feedback, especially when combined with social comparison, can lead to positive group interaction behavior during the learning process (Bandura, 1991 ).

Social network relationships

In this study, we analyzed the impact of social comparison feedback on group social network relationships using three key indicators: interaction intensity, interaction balance, and interaction quality.

Interaction intensity: Interaction intensity measures the frequency of interactions among group members, revealing their level of activity and enthusiasm. This study used two indicators to assess it: average number of posts per person and network density. The results for average posts per person showed that the experimental group consistently had higher interaction intensity than the control group. Although network density appeared descriptively higher in the experimental group, the difference was not statistically significant. This lack of significance might be explained by the limited number of subgroups in both the experimental and control groups, which affected the reliability of network density as an indicator. Nevertheless, considering the results of both indicators, social comparison feedback positively affected interaction intensity: it helped offset the decline in posting activity among participants and enhanced network density. Previous research (Nordin et al., 2022 ) demonstrated that increased interaction typically led to stronger idea exchanges, thereby enhancing the overall online learning experience, which supports the positive influence of social comparison feedback.

Interaction Balance: Interaction balance was assessed using levels of participation homogeneity and in-degree centrality to evaluate the distribution of engagement among group members. Despite the experimental group’s higher number of postings, their levels of participation uniformity and centrality were comparable to those of the control group. This suggests that social comparison feedback effectively maintained, and in some cases even enhanced, a balanced level of participation among members.

Interaction quality: The experimental group demonstrated significantly higher levels of collaborative knowledge construction compared to the control group. Groups exhibited a lower level of knowledge construction in the second phase (discovery of dissonance and inconsistency) than in the third phase (negotiation of meaning/co-construction of knowledge). Despite initial introductions before the experiment, participants were still not very familiar with each other. Due to the entirely online nature of the interactions, this limited familiarity might have led participants to feel that asking questions could bring interpersonal pressure (Kumi-Yeboah, 2018 ). This is consistent with the greater incidence of positive emotions observed in both groups as part of the group-regulated analysis.

During the online asynchronous collaboration, the experimental group received visual social network diagrams and indicators, while the control group only received standard reference values based on their data. The results indicated that the experimental group achieved higher interaction density, balance, and quality compared to the control group. This difference may be because social network metrics in small group collaborations are more sensitive to contextual factors than those in larger networks. The comparative feedback, presented through images and text, likely motivated the experimental group to increase their interactions and improve their social network status. In contrast, the control group’s limited feedback may have restricted their understanding of interaction dynamics, as they lacked the additional motivational element provided to the experimental group.

Practical implications

Application of social comparison feedback: The results of this study affirm the role of social comparison feedback in enhancing adaptive learning outcomes. Future research should investigate the application of social comparison feedback in asynchronous collaborative learning environments to further improve collaboration.

Enhancing discussion forum features: This study introduced pinned posts to the discussion forum to improve information organization. Future enhancements could include filters based on time frame, number of replies, and number of views. Incorporating machine learning methods, as suggested by Ma et al. ( 2023 ), could assist learners in organizing discussion content more effectively. Additionally, using graphic organizers, as proposed by Jeon et al. ( 2022 ), could further enhance the efficacy of online asynchronous collaboration.

Fostering a collaborative atmosphere: Humor in posts was found to positively regulate the learning atmosphere and sharpen task focus. Instructional designers might consider incorporating activities that encourage learners to create and share humor in online learning environments (Song et al., 2021 ) and facilitate intermittent synchronous communication (Hu et al., 2023 ). These strategies could promote a more active collaborative atmosphere and enhance overall collaboration.

Exploring interaction balance indicators: The study indicated that groups with higher average posting and receiving rates tended to exhibit greater participation homogeneity and in-degree centrality. Future research should focus on developing new indicators for online asynchronous collaboration that are less sensitive to posting baselines and group size. Additionally, increasing the volume of discussion could reduce the sensitivity of social network indicators, thereby providing more reliable measures of interaction balance.

Conclusions, limitations, and future work

In this study, we provided social comparison feedback to learners across four dimensions—behavioral, cognitive, interactive, and emotional—to analyze its impact on collaborative learning. Using a randomized controlled trial design, we delivered social comparison feedback to the experimental group, while the control group received only self-referential learning reports. This design enabled us to investigate the effects of social comparison feedback on collaborative learning. The results suggested that social comparison feedback enhanced the regulation of learning processes, stimulated increased monitoring behaviors, and improved social network relationships.

Nevertheless, this study had certain limitations that suggest directions for future research. First, learners were randomly grouped to ensure similar distributions between the experimental and control groups in this study. Future research could explore more effective grouping methods that not only maintain comparability between groups but also enhance the effectiveness of asynchronous collaboration within those groups. Second, we used non-parametric tests to calculate and compare social network-related metrics. However, each group, whether experimental or control, comprised a limited number of subgroups, which may have affected the generalizability of our conclusions. Future studies could include a larger sample size with more subgroups to strengthen the robustness of the results. Third, extending the course duration in future studies could allow more time for asynchronous interaction. Lastly, as the participants were primarily in-service teachers, the conclusions are mainly applicable to this group. Caution should be exercised when generalizing these findings to other populations.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Appel, H., Crusius, J., & Gerlach, A. L. (2015). Social comparison, envy, and depression on Facebook: A study looking at the effects of high comparison standards on depressed individuals. Journal of Social & Clinical Psychology, 34 (4), 277–289. https://doi.org/10.1521/jscp.2015.34.4.277


Artino, A. R., & Jones, K. D. (2012). Exploring the complex relations between achievement emotions and self-regulated learning behaviors in online learning. The Internet and Higher Education, 15 (3), 170–175. https://doi.org/10.1016/j.iheduc.2012.01.006

Bai, S., Hew, K. F., Sailer, M., & Jia, C. (2021). From top to bottom: How positions on different types of leaderboard may affect fully online student learning performance, intrinsic motivation, and course engagement. Computers & Education, 173 , 104297. https://doi.org/10.1016/j.compedu.2021.104297

Bailey, D., Almusharraf, N., & Hatcher, R. (2020). Finding satisfaction: Intrinsic motivation for synchronous and asynchronous communication in the online language learning context. Education and Information Technologies, 26 (3), 2563–2583. https://doi.org/10.1007/s10639-020-10369-z

Baldwin, M., & Mussweiler, T. (2018). The culture of social comparison. Proceedings of the National Academy of Sciences, 115 (39), E9067–E9074. https://doi.org/10.1073/pnas.1721555115

Bandura, A. (1991). Social cognitive theory of self-regulation. Organizational Behavior and Human Decision Processes., 50 (2), 248–287. https://doi.org/10.1016/0749-5978(91)90022-L

Banihashem, S. K., Kerman, N. T., Noroozi, O., Moon, J., & Drachsler, H. (2024). Feedback sources in essay writing: peer-generated or AI-generated feedback? International Journal of Educational Technology in Higher Education . https://doi.org/10.1186/s41239-024-00455-4

Berk, R. H., Bakeman, R., & Gottman, J. M. (1997). Observing interaction: An introduction to sequential analysis. Technometrics, 34 (1), 112–113. https://doi.org/10.1080/00401706.1992.10485258

Biesenbach-Lucas, S. (2004). Asynchronous web discussions in teacher training courses: Promoting collaborative learning—or not? AACE Journal, 12 (2), 155–170. https://www.researchgate.net/publication/228963766

Blei, D. (2000). https://doi.org/10.1162/jmlr.2003.3.4-5.993 . Applied Physics Letters, 3 (4–5), 993–1022. https://doi.org/10.1162/jmlr.2003.3.4-5.993

Burns, A., Holford, P., & Andronicos, N. (2022). Enhancing understanding of foundation concepts in first year university STEM: Evaluation of an asynchronous online interactive lesson. Interactive Learning Environments, 30 (7), 1170–1182. https://doi.org/10.1080/10494820.2020.1712426

Calvani, A., Fini, A., Molino, M., & Ranieri, M. (2010). Visualizing and monitoring effective interactions in online collaborative groups. British Journal of Educational Technology, 41 (2), 213–226. https://doi.org/10.1111/j.1467-8535.2008.00911.x

Carter, R. A., Jr., Rice, M., Yang, S., & Jackson, H. A. (2020). Self-regulated learning in online learning environments: Strategies for remote learning. Information and Learning Science, 121 (5–6), 321–329. https://doi.org/10.1108/ILS-04-2020-0114

Chadwick, D. D., & Platt, T. (2018). investigating humor in social interaction in people with intellectual disabilities: A systematic review of the literature. Frontiers in Psychology . https://doi.org/10.3389/fpsyg.2018.01745

Chejara, P., Kasepalu, R., Prieto, L. P., Rodríguez-Triana, M. J., Ruiz Calleja, A., & Schneider, B. (2024). How well do collaboration quality estimation models generalize across authentic school contexts? British Journal of Educational Technology, 55 (4), 1602–1624. https://doi.org/10.1111/bjet.13402

Chen, C. M., & Chen, P. C. (2023). A gamified instant perspective comparison system to facilitate online discussion effectiveness. British Journal of Educational Technology, 54 (3), 790–811. https://doi.org/10.1111/bjet.13295

Cialdini, R. B., & Goldstein, N. J. (2004). Social influence: Compliance and conformity. Annual Review of Psychology, 55 , 591–621. https://doi.org/10.1146/annurev.psych.55.090902.142015

Corcoran, K., Kedia, G., Illemann, R., & Innerhofer, H. (2020). Affective consequences of social comparisons by women with breast cancer: An experiment. Frontiers in Psychology, 11 , 1234–1234. https://doi.org/10.3389/fpsyg.2020.01234

Cui, Y., & Schunn, C. D. (2024). Peer feedback that consistently supports learning to write and read: providing comments on meaning-level issues. Assessment and Evaluation in Higher Education . https://doi.org/10.1080/02602938.2024.2364025

Delava, M., Michinov, N., Bohec, O., & Hénaff, B. (2017). How can students’ academic performance in statistics be improved? Testing the influence of social and temporal-self comparison feedback in a web-based training environment. Interative Learning Environments., 25 (1), 35–47. https://doi.org/10.1080/10494820.2015.1090456

Depaepe, F., & König, J. (2018). General pedagogical knowledge, self-efficacy and instructional practice: Disentangling their relationship in pre-service teacher education. Teaching and Teacher Education, 69 , 177–190. https://doi.org/10.1016/j.tate.2017.10.003

Dijkstra, P., Kuyper, H., Werf, G. V. D., Buunk, A. P., & Zee, Y. G. V. D. (2008). Social comparison in the classroom: A review. Review of Educational Research, 78 (4), 828–879. https://doi.org/10.3102/0034654308321210

Duvall, M., Matranga, A., & Silverman, J. (2020). Designing for and facilitating knowledge-building discourse in online courses. Information and Learning Sciences, 121 (7/8), 487–501. https://doi.org/10.1108/ILS-04-2020-0081

Fam, J. Y., Bala Murugan, S., & Yap, C. Y. L. (2020). Envy in social comparison-behaviour relationship: Is social comparison always bad? Psychological Studies, 65 (4), 420–428. https://doi.org/10.1007/s12646-020-00575-7

Festinger, L. (1954). A theory of social comparison processes. Human Relations, 7 (2), 117–140. https://doi.org/10.1177/001872675400700202

Fleiss, J. L., Levin, B. A., & Paik, M. Cho. (2003). Statistical methods for rates and proportions (3rd ed.). J. Wiley.

Flener-Lovitt, C., Bailey, K., & Han, R. (2020). Using structured teams to develop social presence in asynchronous chemistry courses. Journal of Chemical Education, 97 (9), 2519–2525. https://doi.org/10.1021/acs.jchemed.0c00765

Fleur, D. S., van den Bos, W., & Bredeweg, B. (2023). Social comparison in learning analytics dashboard supporting motivation and academic achievement. Computers and Education Open , 4. https://doi.org/10.1016/j.caeo.2023.100130

Fredricks, J. A., Blumenfeld, P. C., & Paris, A. H. (2004). School engagement: Potential of the concept, state of the evidence. Review of Educational Research, 74 (1), 59–109. https://doi.org/10.3102/00346543074001059

Frey, B. A., & Alman, S. W. (2003). Applying adult learning theory to the online classroom. New Horizons in Adult Education & Human Resource Development, 17 (1), 4–12. https://doi.org/10.1002/nha3.10155

Gao, X., Noroozi, O., Gulikers, J., Biemans, H. J., & Banihashem, S. K. (2024). A systematic review of the key components of online peer feedback practices in higher education. Educational Research Review, 42 , 100588. https://doi.org/10.1016/j.edurev.2023.100588

Gegenfurtner, A., & Ebner, C. (2019). Webinars in higher education and professional training: A meta-analysis and systematic review of randomized controlled trials. Educational Research Review, 28 , 100293. https://doi.org/10.1016/j.edurev.2019.100293

Guan, Y. H., Tsai, C. C., & Hwang, F. K. (2006). Content analysis of online discussion on a senior-high-school discussion forum of a virtual physics laboratory. Instructional Science, 34 (4), 279–311. https://doi.org/10.1007/s11251-005-3345-1

Gunawardena, C. N., Lowe, C. A., & Anderson, T. (1997). Analysis of a global online debate and the development of an interaction analysis model for examining social construction of knowledge in computer conferencing. Journal of Educational Computing Research, 17 (4), 397–431. https://doi.org/10.2190/7MQV-X9UJ-C7Q3-NRAG

Han, R., Xu, J., Ge, Y., & Qin, Y. (2020). The impact of social media use on job burnout: The role of social comparison. Frontiers in Public Health, 8 , 588097. https://doi.org/10.3389/fpubh.2020.588097

Hendarwati, E., Nurlaela, L., Bachri, B. S., & Sa’ida, N. (2021). Collaborative problem based learning integrated with online learning. International Journal of Emerging Technologies in Learning, 16 (13), 29–39. https://doi.org/10.3991/ijet.v16i13.24159

Hou, H. T., & Wu, S. Y. (2011). Analyzing the social knowledge construction behavioral patterns of an online synchronous collaborative discussion instructional activity using an instant messaging tool: A case study. Computers & Education, 57 (2), 1459–1468. https://doi.org/10.1016/j.compedu.2011.02.012

Hu, Y. H., Yu, H. Y., Tzeng, J. W., & Zhong, K. C. (2023). Using an avatar-based digital collaboration platform to foster ethical education for university students. Computers and Education, 196 , 104728. https://doi.org/10.1016/j.compedu.2023.104728

Ingram, M. (2023). A (dis)play on words: Emergent bilingual students’ use of verbal jocularity as a channel of the translanguaging corriente. Linguistics and Education, 74 , 101165. https://doi.org/10.1016/j.linged.2023.101165

Jeon, M., Kwon, K., & Bae, H. (2022). Effects of different graphic organizers in asynchronous online discussions. Educational Technology Research and Development, 71 (2), 689–715. https://doi.org/10.1007/s11423-022-10175-z

Joksimovic, S., Gasevic, D., Kovanovic, V., Riecke, B. E., & Hatala, M. (2015). Social presence in online discussions as a process predictor of academic performance. Journal of Computer Assisted Learning, 31 (6), S18–S19. https://doi.org/10.1111/jcal.12107

Kalinowski, E., Egert, F., Gronostaj, A., & Vock, M. (2020). Professional development on fostering students’ academic language proficiency across the curriculum—a meta-analysis of its impact on teachers’ cognition and teaching practices. Teaching and Teacher Education, 88 , 102971. https://doi.org/10.1016/j.tate.2019.102971

Kaufmann, R., & Vallade, J. I. (2020). Exploring connections in the online learning environment: Student perceptions of rapport, climate, and loneliness. Interactive Learning Environments, 30 (10), 1794–1808. https://doi.org/10.1080/10494820.2020.1749670

Kawai, G. (2006). Collaborative peer-based language learning in unsupervised asynchronous online environments. Fourth International Conference on Creating, Connecting and Collaborating through Computing (C5’06), 35–41. https://doi.org/10.1109/C5.2006.12

Kim, Y., Jeong, S., Ji, Y., Lee, S., Kwon, K. H., & Jeon, J. W. (2015). Smartphone response system using twitter to enable effective interaction and improve engagement in large classrooms. IEEE Transactions on Education, 58 (2), 98–103. https://doi.org/10.1109/TE.2014.2329651

Kollöffel, B., & Jong, T. (2016). Can performance feedback during instruction boost knowledge acquisition? Contrasting criterion-based and social comparison feedback. Interactive Learning Environments., 24 (7), 1428–1438. https://doi.org/10.1080/10494820.2015.1016535

Kong, F., Wang, M., Zhang, X., Li, X., & Sun, X. (2021). Vulnerable narcissism in social networking sites: The role of upward and downward social comparisons. Frontiers in Psychology . https://doi.org/10.3389/fpsyg.2021.711909

Kumi-Yeboah, A. (2018). Designing a cross-cultural collaborative online learning framework for online instructors. Online Learning Journal, 22 (4), 181–201. https://doi.org/10.24059/olj.v22i4.1520

Li, J., Tang, Y., Cao, M., & Hu, X. (2018). The moderating effects of discipline on the relationship between asynchronous discussion and satisfaction with MOOCs. Journal of Computers in Education (the Official Journal of the Global Chinese Society for Computers in Education), 5 (3), 279–296. https://doi.org/10.1007/s40692-018-0112-2

Liaw, H., Yu, Y.-R., Chou, C.-C., & Chiu, M.-H. (2021). Relationships between facial expressions, prior knowledge, and multiple representations: A case of conceptual change for kinematics instruction. Journal of Science Education and Technology, 30 (2), 227–238. https://doi.org/10.1007/s10956-020-09863-3

Lin, X., & Sun, Q. (2024). Discussion activities in asynchronous online learning: Motivating adult learners’ interactions. The Journal of Continuing Higher Education, 72 (1), 84–103. https://doi.org/10.1080/07377363.2022.2119803

Liu, S., Hu, T., Chai, H., Su, Z., & Peng, X. (2021). Learners’ interaction patterns in asynchronous online discussions: An integration of the social and cognitive interactions. British Journal of Educational Technology, 53 (1), 23–40. https://doi.org/10.1111/bjet.13147

Lu, Y., Li, K., Sun, Z., Ma, N., & Sun, Y. (2023). Exploring the effects of role scripts and goal-orientation scripts in collaborative problem-solving learning. Education and Information Technologies, 28 , 12191–12213. https://doi.org/10.1007/s10639-023-11674-z

Ma, N., Du, L., & Lu, Y. (2022a). A model of factors influencing in-service teachers’ social network prestige in online peer assessment. Australasian Journal of Educational Technology, 38 (5), 90–108. https://doi.org/10.14742/ajet.7622

Ma, N., Du, L., Lu, Y., & Sun, Y.-F. (2022b). The influence of social network prestige on in-service teachers’ learning outcomes in online peer assessment. Computers and Education Open, 3 , 100087. https://doi.org/10.1016/j.caeo.2022.100087

Ma, N., Zhang, Y.-L., Liu, C.-P., & Du, L. (2023). The comparison of two automated feedback approaches based on automated analysis of the online asynchronous interaction: A case of massive online teacher training. Interactive Learning Environments . https://doi.org/10.1080/10494820.2023.2191252

Merk, S., Poindl, S., Wurster, S., & Bhol, T. (2020). Fostering aspects of pre-service teachers’ data literacy: Results of a randomized controlled trial. Teaching and Teacher Education, 91 , 103043. https://doi.org/10.1016/j.tate.2020.103043

Mussweiler, T. (2003). Comparison processes in social judgment: Mechanisms and consequences. Psychological Review, 110 (3), 472–489. https://doi.org/10.1037/0033-295X.110.3.472

Mussweiler, T., & Epstude, K. (2009). Relatively fast! Efficiency advantages of comparative thinking. Journal of Experimental Psychology: General, 138 (1), 1–21. https://doi.org/10.1037/a0014374

Neugebauer, J., Ray, D. G., & Sassenberg, K. (2016). When being worse helps: The in-fluence of upward social comparisons and knowledge awareness on learner engagement and learning in peer-to-peer knowledge exchange. Learning and Instruction, 44 , 41–52. https://doi.org/10.1016/j.learninstruc.2016.02.007

Nordin, N., Samsudin, M. A., Mansor, A. F., & Ismail, M. E. (2022). Social network analysis to examine the effectiveness of e-PBL with design thinking to foster collaboration: comparisons between high and low self-regulated learners. Journal of Technical Education and Training, 12 (4), 48–59. https://doi.org/10.30880/jtet.2020.12.04.005

Noroozi, O., Alqassab, M., Taghizadeh Kerman, N., Banihashem, S. K., & Panadero, E. (2024). Does perception mean learning? Insights from an online peer feedback setting. Assessment and Evaluation in Higher Education . https://doi.org/10.1080/02602938.2024.2345669

Oh, E. G., Huang, W.-H.D., Hedayati Mehdiabadi, A., & Ju, B. (2018). Facilitating critical thinking in asynchronous online discussion: Comparison between peer- and instructor-redirection. Journal of Computing in Higher Education, 30 (3), 489–509. https://doi.org/10.1007/s12528-018-9180-6

Park, J., Kim, B., & Park, S. (2021). Understanding the behavioral consequences of upward social comparison on social networking sites: The mediating role of emotions. Sustainability . https://doi.org/10.3390/su13115781

Prestridge, S. (2016). Conceptualising self-generating online teacher professional development. Technology, Pedagogy and Education, 26 (1), 85–104. https://doi.org/10.1080/1475939x.2016.1167113

Ray, D. G., Neugebauer, J., & Sassenberg, K. (2017). Learners’ habitual social comparisons can hinder effective learning partner choice. Learning and Individual Differences, 58 , 83–89. https://doi.org/10.1016/j.lindif.2017.08.003

Rogat, T. K., & Adams-Wiggins, K. R. (2015). Interrelation between regulatory and socioemotional processes within collaborative groups characterized by facilitative and directive other-regulation. Computers in Human Behavior, 52 , 589–600. https://doi.org/10.1016/j.chb.2015.01.026

Schellens, T., & Valcke, M. (2005). Collaborative learning in asynchronous discussion groups: What about the impact on cognitive processing? Computers in Human Behavior, 21 (6), 957–975. https://doi.org/10.1016/j.chb.2004.02.025

Schenke, K., Redman, E. J. K. H., Chung, G. K. W. K., Chang, S. M., Feng, T., Parks, C. B., & Roberts, J. D. (2020). Does “Measure Up!” measure up? Evaluation of an iPad app to teach preschoolers measurement concepts. Computers & Education, 146 , 103749. https://doi.org/10.1016/j.compedu.2019.103749

Shaffer, D. W., Collier, W., & Ruis, A. R. (2016). A tutorial on epistemic network analysis: Analyzing the structure of connections in cognitive, social, and interaction data. Journal of Learning Analytics, 3 (3), 9–45. https://doi.org/10.18608/jla.2016.33.3

Shea, P., & Bidjerano, T. (2010). Learning presence: Towards a theory of self-efficacy, self-regulation, and the development of a communities of inquiry in online and blended learning environments. Computers & Education, 55 (4), 1721–1731. https://doi.org/10.1016/j.compedu.2010.07.017

Şimşek, A. S. (2023). The power and type I error of Wilcoxon-Mann-Whitney, Welch’s t, and student’s t tests for Likert-type data. International Journal of Assessment Tools in Education, 10 (1), 114–128. https://doi.org/10.21449/ijate.1183622

Song, K., Williams, K. M., Schallert, D. L., & Pruitt, A. A. (2021). Humor in multimodal language use: Students’ Response to a dialogic, social-networking online assignment. Linguistics and Education, 63 , 100903. https://doi.org/10.1016/j.linged.2021.100903

Sun, Z., Lin, C.-H., Lv, K., & Song, J. (2021). Knowledge-construction behaviors in a mobile learning environment: A lag-sequential analysis of group differences. Educational Technology Research and Development, 69 (2), 533–551. https://doi.org/10.1007/s11423-021-09938-x

Tlili, A., Wang, H., Gao, B., Shi, Y., Zhiying, N., Looi, C.-K., & Huang, R. (2023). Impact of cultural diversity on students’ learning behavioral patterns in open and online courses: A lag sequential analysis approach. Interactive Learning Environments, 31 (6), 3951–3970. https://doi.org/10.1080/10494820.2021.1946565

Verduyn, P., Gugushvili, N., Massar, K., Tht, K., & Kross, E. (2020). Social comparison on social networking sites. Current Opinion in Psychology, 36 , 32–37. https://doi.org/10.1016/j.copsyc.2020.04.002

Wambsganss, T., Janson, A., & Leimeister, J. M. (2022). Enhancing argumentative writing with automated feedback and social comparison nudging. Computers and Education, 191 , 104644. https://doi.org/10.1016/j.compedu.2022.104644

Wang, C., Fang, T., & Gu, Y. (2020). Learning performance and behavioral patterns of online collaborative learning: Impact of cognitive load and affordances of different multimedia. Computers & Education, 143 , 103683. https://doi.org/10.1016/j.compedu.2019.103683

Xie, K., Di Tosto, G., Lu, L., & Cho, Y. S. (2018). Detecting leadership in peer-moderated online collaborative learning through text mining and social network analysis. Internet & Higher Education, 38 , 9–17. https://doi.org/10.1016/j.iheduc.2018.04.002

Yang, C. C. Y. (2023). Lag sequential analysis for identifying blended learners? sequential patterns of e-Book note-taking for self-regulated learning. Educational Technology & Society Journal of International Forum of Educational Technology & Society and IEEE Learning Technology Task Force, 26 (2), 63–75. https://doi.org/10.30191/ETS.202304_26(2).0005

Yang, Y. (2022). Collaborative analytics-supported reflective Assessment for Scaffolding Pre-service Teachers’ collaborative Inquiry and Knowledge Building. International Journal of Computer-Supported Collaborative Learning, 17 (2), 249–292. https://doi.org/10.1007/s11412-022-09372-y

Yang, Y., van Aalst, J., & Chan, C. K. K. (2020). Dynamics of reflective assessment and knowledge building for academically low-achieving students. American Educational Research Journal, 57 (3), 1241–1289. https://doi.org/10.3102/0002831219872444

Zamora, M. D. (1985). Review of The Constitution of Society. Man, 20 (3), 567–568. https://doi.org/10.2307/2802469

Article   MathSciNet   Google Scholar  

Zhang, S., Chen, J., Wen, Y., Chen, H., Gao, Q., & Wang, Q. (2021). Capturing regulatory patterns in online collaborative learning: A network analytic approach. International Journal of Computer-Supported Collaborative Learning, 16 (1), 37–66. https://doi.org/10.1007/s11412-021-09339-5

Zheng, J., Xing, W., & Zhu, G. (2019). Examining sequential patterns of self- and socially shared regulation of STEM learning in a CSCL environment. Computers & Education, 136 , 34–48. https://doi.org/10.1016/j.compedu.2019.03.005

Zheng, Y. F., Zhao, Y. N., & Wang, W. (2021). Research on social relationship analysis and visualization in online collaborative discussions. China Education Info, 2021 (5), 10–17. https://doi.org/10.3969/j.issn.1673-8454.2021.03.004

Zhou, Q. G., Guo, S. C., & Zhou, R. (2015). Investigation about participatory teachers’ training based on MOOC. International Journal of Distance Education Technologies, 13 (3), 44–52. https://doi.org/10.4018/ijdet.2015070103

Download references

Acknowledgements

We express our sincere gratitude to all participants who voluntarily participated in our study and offered invaluable support during the data collection phase.

This work was funded by the “Research on Time-Emotion-Cognition Analysis Model and Automatic Feedback Mechanism of Online Asynchronous Interaction” project [Grant number: 62077007], supported by the National Natural Science Foundation of China, and by the “Research on Multimodal Process Data-Driven Automatic Analysis and Feedback for Deep Interdisciplinary Learning” project [Grant number: YLXKPY-XSDW202401], supported by the First-Class Education Discipline Development fund of Beijing Normal University, China.

Author information

Authors and affiliations

School of Educational Technology, Faculty of Education, Beijing Normal University, Beijing, 100875, China

Yao Lu, Ning Ma & Wen-Yu Yan

Advanced Innovation Center for Future Education, Beijing Normal University, Beijing, 100875, China

Contributions

Yao Lu: Conceptualization, Methodology, Formal analysis, Investigation, Data curation, Writing—original draft, Writing—editing and translation, Writing—review and editing, Visualization, Project administration; Ning Ma: Conceptualization, Methodology, Writing—review and editing, Supervision, Funding acquisition, Project administration; Wen-Yu Yan: Writing—editing and translation, Writing—review and editing, Supervision, Project administration.

Corresponding author

Correspondence to Ning Ma .

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Lu, Y., Ma, N. & Yan, W.-Y. Social comparison feedback in online teacher training and its impact on asynchronous collaboration. Int J Educ Technol High Educ 21, 55 (2024). https://doi.org/10.1186/s41239-024-00486-x

Received: 17 February 2024

Accepted: 26 August 2024

Published: 23 September 2024

DOI: https://doi.org/10.1186/s41239-024-00486-x


Keywords

  • Teacher professional development
  • Online asynchronous collaboration
  • Group regulation
  • Interaction behavior
