Article Summaries, Reviews & Critiques

Writing an article CRITIQUE

A critique asks you to evaluate an article and the author’s argument. You will need to look critically at what the author is claiming, evaluate the research methods, and look for possible problems with, or applications of, the researcher’s claims.

Introduction

Give an overview of the author’s main points and how the author supports those points. Explain what the author found and describe the process they used to arrive at this conclusion.

Body Paragraphs

Interpret the information from the article:

  • Does the author review previous studies? Is current and relevant research used?
  • What type of research was used – empirical studies, anecdotal material, or personal observations?
  • Was the sample too small to generalize from?
  • Was the participant group lacking in diversity (race, gender, age, education, socioeconomic status, etc.)?
  • For instance, volunteers gathered at a health food store might have different attitudes about nutrition than the population at large.
  • How useful does this work seem to you? How does the author suggest the findings could be applied and how do you believe they could be applied?
  • How could the study have been improved in your opinion?
  • Does the author appear to have any biases (related to gender, race, class, or politics)?
  • Is the writing clear and easy to follow? Does the author’s tone add to or detract from the article?
  • How useful are the visuals (such as tables, charts, maps, photographs) included, if any? How do they help to illustrate the argument? Are they confusing or hard to read?
  • What further research might be conducted on this subject?

Conclusion

Try to synthesize the pieces of your critique to emphasize your own main points about the author’s work, relating the researcher’s work to your own knowledge or to topics being discussed in your course.

From the Center for Academic Excellence, University of Saint Joseph Connecticut

Additional Resources

Writing an Article Critique (from The University of Arizona Global Campus Writing Center)

How to Critique an Article (from Essaypro.com)

How to Write an Article Critique (from EliteEditing.com.au)


How to Write an Article Critique Step-by-Step

Table of contents

  • 1 What is an Article Critique Writing?
  • 2 How to Critique an Article: The Main Steps
  • 3 Article Critique Outline
  • 4 Article Critique Formatting
  • 5 How to Write a Journal Article Critique
  • 6 How to Write a Research Article Critique
  • 7 Research Methods in Article Critique Writing
  • 8 Tips for writing an Article Critique

Do you know how to critique an article? If not, don’t worry – this guide will walk you through the writing process step-by-step. First, we’ll discuss what a research article critique is and why it matters. Then, we’ll outline the key points to consider when critiquing a scientific article. Finally, we’ll provide a step-by-step guide on how to write an article critique, including the introduction, body, and summary. Read on to get the main idea of crafting a critique paper.

What is an Article Critique?

An article critique is a formal analysis and evaluation of a piece of writing. It is often written in response to a particular article but can also respond to a book, a movie, or any other form of writing. There are many different types of review articles. Before writing an article critique, you should have an idea about each of them.

To write a good critique, you must first read the article thoroughly and make sure you understand its purpose. Then, outline the article’s key points and discuss how well they are presented. Next, offer your own comments and opinions on the article, discussing whether you agree or disagree with the author’s points and subject. Finally, conclude your critique with a brief summary of your thoughts on the article. Make sure a general audience can understand your perspective on the piece.

How to Critique an Article: The Main Steps

If you are wondering “what is included in an article critique,” the answer is:

An article critique typically includes the following:

  • A brief summary of the article.
  • A critical evaluation of the article’s strengths and weaknesses.
  • A conclusion.

When critiquing an article, it is essential to read the piece critically and consider the author’s purpose and chosen research strategies. Next, provide a brief summary of the text, highlighting the author’s main points and ideas. In the body paragraphs, critique the article using formal language and relevant literature. Describe the thesis statement, main idea, and author’s interpretations in your own words, using specific examples from the article. It is also vital to discuss the statistical methods used and whether they are appropriate for the research question. Make notes of the points you think need to be discussed, and review the literature on which the author grounds the research. Offer your perspective on the article and whether it is well written. Finally, provide background information on the topic if necessary.

When you are reading an article, it is vital to take notes and critique the text to understand it fully and to be able to use the information in it. Here are the main steps for critiquing an article:

  • Read the piece thoroughly, taking notes as you go. Ensure you understand the main points and the author’s argument.
  • Take a look at the author’s evidence. Is it powerful? Does it back up the author’s point of view?
  • Carefully examine the article’s tone. Is it biased? Are you being persuaded by the author in any way?
  • Look at the structure. Is it well organized? Does it make sense?
  • Consider the writing style. Is it clear? Is it well-written?
  • Evaluate the sources the author uses. Are they credible?
  • Think about your own opinion. With what do you concur or disagree? Why?

Article Critique Outline

When assigned an article critique, your instructor asks you to read the article, analyze it, and provide feedback. A specific format is typically followed when writing an article critique.

An article critique usually has three sections: an introduction, a body, and a conclusion.

  • The introduction of your article critique should have a summary and key points.
  • The critique’s main body should thoroughly evaluate the piece, highlighting its strengths and weaknesses, and state your ideas and opinions with supporting evidence.
  • The conclusion should restate your research and describe your opinion.

You should provide your own analysis rather than simply agreeing or disagreeing with the author. When writing an article review, it is essential to be objective and critical. Describe your perspective on the subject and create an article review summary. Be sure to use proper grammar, spelling, and punctuation; write in the third person; and cite your sources.

Article Critique Formatting

When writing an article critique, you should follow a few formatting guidelines. Using a proper format makes your review clear and easy to read.

Use double spacing throughout your critique; it makes the text easier for your instructor to read.

Indent each new paragraph to visually separate the sections of your critique.

Use headings to organize your critique. Your introduction, body, and conclusion should stand out, making it easy for your instructor to follow your thoughts.

Use a standard font, such as Times New Roman or Arial, in 12-point size to keep your critique readable.

How to Write a Journal Article Critique

When critiquing a journal article, there are a few key points to keep in mind:

  • Good critiques should be objective, meaning that the author’s ideas and arguments should be evaluated without personal bias.
  • Critiques should be critical, meaning that all aspects of the article should be examined, including the author’s introduction, main ideas, and discussion.
  • Critiques should be informative, providing the reader with a clear understanding of the article’s strengths and weaknesses.

When critiquing a research article, evaluating the author’s argument and the evidence they present is important. The author should state their thesis or the main point in the introductory paragraph. You should explain the article’s main ideas and evaluate the evidence critically. In the discussion section, the author should explain the implications of their findings and suggest future research.

It is also essential to keep a critical eye when reading scientific articles. In order to be credible, the scientific article must be based on evidence and previous literature. The author’s argument should be well-supported by data and logical reasoning.

How to Write a Research Article Critique

When you are assigned a research article, the first thing you need to do is read the piece carefully. Make sure you understand the subject matter and the author’s chosen approach. Next, you need to assess the importance of the author’s work. What are the key findings, and how do they contribute to the field of research?

Finally, you need to provide a critical point-by-point analysis of the article. This should include discussing the research questions, the main findings, and your overall impression of the scientific piece. In conclusion, you should state whether the text succeeds or falls short.

Use the following tips to write a research article critique that is clear, concise, and properly formatted.

  • Take notes while you read the text in its entirety. Write down each point you agree and disagree with.
  • Write a thesis statement that concisely and clearly outlines the main points.
  • Write a paragraph that introduces the article and provides context for the critique.
  • Write a paragraph for each of the following points, summarizing the main points and providing your own analysis:
  • The purpose of the study
  • The research question or questions
  • The methods used
  • The outcomes
  • The conclusions drawn by the author(s)
  • Mention the strengths and weaknesses of the piece in a separate paragraph.
  • Write a conclusion that summarizes your thoughts about the article.

Research Methods in Article Critique Writing

When writing an article critique, it is important to use research methods to support your arguments. There are a variety of research methods that you can use, and each has its strengths and weaknesses. In this text, we will discuss four of the most common research methods used in article critique writing: quantitative research, qualitative research, systematic reviews, and meta-analysis.

Quantitative research is a research method that uses numbers and statistics to analyze data. This type of research is used to test hypotheses or measure a treatment’s effects. Quantitative research is often considered more reliable than qualitative research because it draws on a large amount of information, but it can be difficult to collect enough data to do it properly.

Qualitative research is a research method that uses words and interviews to analyze data. This type of research is used to understand people’s thoughts and feelings. Some argue it is less likely to be biased than quantitative research, though it is more expensive and time-consuming.

Systematic reviews are a type of research that uses a set of rules to search for and analyze studies on a particular topic. Some consider systematic reviews more reliable than other research methods because they use a rigorous process to find and analyze studies. However, they can be costly and slow to carry out.

Meta-analysis is a type of research that combines several studies’ results to better understand a treatment’s overall effect. It is generally considered one of the most reliable types of research because it pools data from several vetted studies. Conversely, it involves a long and costly process.
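
To make the last point concrete, here is a minimal sketch of a fixed-effect, inverse-variance meta-analysis; the effect sizes, variances, and function name are invented for illustration and are not drawn from any source cited here:

```python
import numpy as np

def fixed_effect_pooled(effects, variances):
    """Inverse-variance weighted (fixed-effect) pooled estimate.

    Each study is weighted by 1/variance, so more precise studies
    contribute more to the combined effect.
    """
    effects = np.asarray(effects, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    pooled = np.sum(weights * effects) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))  # SE of the pooled effect
    return pooled, pooled_se

# Hypothetical effect sizes (e.g., standardized mean differences) from three studies
effects = [0.40, 0.25, 0.55]
variances = [0.02, 0.05, 0.04]
pooled, se = fixed_effect_pooled(effects, variances)
print(f"pooled effect = {pooled:.3f} +/- {1.96 * se:.3f} (95% CI half-width)")
```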


Tips for writing an Article Critique

It’s crucial to keep in mind that when you write an article critique, you’re not just sharing your opinion of the content. Instead, you are providing a critical analysis of the article’s strengths and weaknesses. To write a compelling critique, take careful note of the essential elements as you read, and follow these tips:

  • Make sure that you understand the thesis statement.
  • Write down your thoughts, including strengths and weaknesses.
  • Use evidence from the article to support your points.
  • Create a clear and concise critique, making sure to avoid giving your opinion.

It is important to be clear and concise when creating an article critique. You should avoid giving your opinion and instead focus on providing a critical analysis. You should also use evidence from the article to support your points.

Eberly Center

Teaching Excellence & Educational Innovation

What is the difference between formative and summative assessment?

Formative assessment

The goal of formative assessment is to monitor student learning to provide ongoing feedback that can be used by instructors to improve their teaching and by students to improve their learning. More specifically, formative assessments:

  • help students identify their strengths and weaknesses and target areas that need work
  • help faculty recognize where students are struggling and address problems immediately

Formative assessments are generally low stakes, which means that they have low or no point value. Examples of formative assessments include asking students to:

  • draw a concept map in class to represent their understanding of a topic
  • submit one or two sentences identifying the main point of a lecture
  • turn in a research proposal for early feedback

Summative assessment

The goal of summative assessment is to evaluate student learning at the end of an instructional unit by comparing it against some standard or benchmark.

Summative assessments are often high stakes, which means that they have a high point value. Examples of summative assessments include:

  • a midterm exam
  • a final project
  • a senior recital

Information from summative assessments can be used formatively when students or faculty use it to guide their efforts and activities in subsequent courses.


Formative vs. summative assessment: impacts on academic motivation, attitude toward learning, test anxiety, and self-regulation skill

Seyed M. Ismail

1 College of Humanities and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia

D. R. Rahul

2 School of Science and Humanities, Shiv Nadar University Chennai, Chennai, India

Indrajit Patra

3 NIT Durgapur, Durgapur, West Bengal, India

Ehsan Rezvani

4 English Department, Isfahan (Khorasgan) Branch, Islamic Azad University, Isfahan, Iran

Associated Data

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Abstract

As assessment plays an important role in the process of teaching and learning, this research explored the impacts of formative and summative assessment on the academic motivation, attitude toward learning, test anxiety, and self-regulation skill of EFL students in Iran. To fulfill the objectives of this research, 72 Iranian EFL learners were chosen based on convenience sampling and assigned to two experimental groups (a summative group and a formative group) and a control group. The groups first took pre-tests of test anxiety, motivation, and self-regulation skill. Then, one experimental group was trained following the rules of formative assessment and the other experimental group was taught according to summative assessment. The control group was instructed without any preplanned assessment. After a 15-session treatment, post-tests of test anxiety, motivation, and self-regulation skill were administered to all groups to assess the impacts of the instruction on their language achievement. Lastly, an attitude questionnaire was administered to both experimental groups to examine their attitudes towards the impacts of formative and summative assessment on their English learning improvement. The outcomes of one-way ANOVA and Bonferroni tests revealed that both summative and formative assessments were effective, but the formative one was more effective on academic motivation, test anxiety, and self-regulation skill. The findings of a one-sample t-test indicated that the participants had positive attitudes towards summative and formative assessments. Based on the results, it can be concluded that formative assessment is an essential part of teaching that should be used in EFL instructional contexts. The implications of this study can help students to detect their own weaknesses and target the areas that need more effort and work.

Introduction

In teaching and learning, assessment is defined as a procedure applied by instructors and students during instruction through which teachers provide necessary feedback to modify ongoing learning and teaching and develop learners’ attainment of planned instructional aims (Robinowitz, 2010). According to Popham (2008), assessment is an intended procedure in which evidence of learners’ status is utilized by educators to adjust their ongoing instructional processes or applied by learners to change their present instructional strategies. Assessment intends to improve learning, and it is used to reduce the gap between students’ present instructional situation and their target learning objectives (Heritage, 2012).

Two types of assessment are formative and summative. According to Glazer (2014), summative assessment is generally applied to give learners a numerical score with limited feedback. Therefore, summative assessment is commonly used to measure learning and is rarely used for learning. Educators can make summative assessment more formative by giving learners the opportunity to learn from exams. This would mean supplying pupils with feedback on exams and making use of the teaching potential of exams. Wininger (2005) proposed an amalgamation of assessment techniques between summative assessment and formative assessment. This marriage between the two is referred to as summative-formative assessment. Based on Wininger, summative-formative assessment is used to review the exam with examinees so they can get feedback on comprehension. Formative-summative assessment occurs in two primary forms: using a mock exam before the final, or using the final exam before the retake.

Formative assessment allows for feedback which improves learning, while summative assessment measures learning. Formative assessment refers to frequent, interactive assessments of students’ development and understanding to recognize their needs and adjust teaching appropriately (Alahmadi et al., 2019). According to Glazer (2014), formative assessment is generally defined as tasks that allow pupils to receive feedback on their performance during the course. In the classroom, teachers use assessments as a diagnostic tool at the end of lessons or units. In addition, teachers can use assessments for teaching, by identifying student misconceptions and bridging gaps in learning through meaningful feedback (Dixson & Worrell, 2016). Unfortunately, numerous instructors consider formative assessments a tool to measure students’ learning, while missing out on their teaching potential. Testing and teaching can be one and the same, which will be discussed further in this research (Remmi & Hashim, 2021).

According to Black et al. (2004), using formative tests for formative purposes improves classroom practice, whereby students can be engaged in both reflective and active review of course content. In general terms, formative assessment is concerned with helping students to develop their learning (Buyukkarci & Sahinkarakas, 2021). Formative assessment can be considered a pivotal and valid part of the blending of assessment and teaching (Ozan & Kıncal, 2018). Formative assessment helps students gain an understanding of the assessment process and provides them with feedback on how to refine their efforts for improvement. However, in practice, assessment for learning is still in its infancy, and many instructors still struggle with providing productive and timely feedback (Clark, 2011).

Using the mentioned assessments can positively affect the test anxiety of students. Test anxiety signifies the extent to which students experience apprehension, fear, uneasiness, panic, tension, and restlessness when even thinking of forthcoming tests or exams (Ahmad, 2012). Anxiety can also be regarded as a product of hesitation about imminent events or situations (Craig et al., 2000). Test anxiety is the emotional reaction or state of stress that occurs before exams and remains throughout the exams (Sepehrian, 2013). Anxiety can commonly be connected to threats to self-efficacy and appraisals of circumstances as threatening, or to reactions to a source of stress (Pappamihiel, 2002).

The other variable which can influence the consequences of tests or testing sessions in EFL settings is students’ attitudes towards English culture, the English language, and English people. Kara (2009) stated that attitude about learning, together with beliefs and opinions, has a significant impact on learners’ behaviors and consequently on their performance. Learners who hold desirable beliefs about language learning tend to develop more positive attitudes toward it. On the other hand, undesirable beliefs can result in negative attitudes, class anxiety, and low cognitive achievement (Chalak & Kassaian, 2010; Tella et al., 2010). There are both negative and positive attitudes towards learning. Positive attitudes can develop learning, while negative attitudes can become barriers to learning; students form such attitudes when they have difficulties in learning or simply feel that what is presented to them is boring. While a negative attitude toward learning can lead to poor student performance, a positive attitude can result in appropriate and good performance (Ellis, 1994).

Woods (2015) says that instructors should regularly utilize formative assessment to advance learners’ self-regulation skills and boost their motivation. Motivation refers to the reasons why people behave differently in different situations. It is considered the intensity and direction of students’ efforts: intensity refers to how hard students try to reach their objectives, and direction refers to the objectives they intend to reach (Ahmadi et al., 2009; Paul & Elder, 2013). Motivation is an inborn phenomenon influenced by four agents: aim (the aim of behaviors, purposes, and tendencies), instrument (the instruments used to reach objectives), situation (environmental and outer stimulants), and temper (the inner state of the organism). To reach their goals, people first should acquire the essential incentives. For instance, academic achievement motivation is significant to scholars (Firouznia et al., 2009).

Wiliam (2014) also asserts that self-regulated learning can be a crucial part of productive formative assessment, concerning the techniques of explaining, sharing, and understanding instructional goals and students’ success and responsibility for their own learning. Self-regulation skill requires learners to dynamically utilize their cognitive skills; try to achieve their learning aims; receive support from their classmates, parents, and instructors when needed; and, most significantly, be responsible for their own learning (Ozan & Kıncal, 2018). This research aimed to explore the impacts of summative and formative assessments on Iranian EFL learners’ academic motivation, attitude toward learning, test anxiety, and self-regulation skill. This study is significant as it compared the effects of two kinds of assessment, namely formative and summative, on these four variables. As this research investigated the effects of the mentioned assessments on four emotional variables simultaneously, it can be considered a novel study.

Review of the literature

In the field of teaching English as a foreign language, several researchers and experts have defined the term “assessment” as a pivotal component of the process of teaching. According to Brown (2003), assessment is a process of collecting data about learners’ capabilities to conduct learning tasks. That is, assessment is the way instructors gather data about their methods and their pupils’ improvement. Furthermore, assessment is an inseparable component of teaching, since it is impossible to think of teaching without assessment. Brown (2003) defined assessment in relation to testing: the difference between them is that the latter occurs at an identified point in time, while the former is an ongoing process that occurs regularly (Brown, 2003).

Other scholars explained the meaning of assessment by distinguishing it from evaluation. Regarding the difference between the two, Nunan (1992) asserted that assessment refers to the procedures and processes whereby teachers determine what students can do in the target language, and added that evaluation refers to a wider range of processes that may or may not include assessment data. In this way, then, assessment is process-oriented while evaluation is product-oriented. Palomba and Banta (1999) defined assessment as “the systematic collection, review, and use of information about educational programs undertaken to improve learning and development” (p. 4). All in all, assessing students’ performance means recognizing and gathering information, receiving feedback, and analyzing and modifying the learning processes. The main goal, thus, is to overcome barriers to learning. Assessment is then used to interpret the performance of students, develop learning, and modify teaching (Aouine, 2011; Ghahderijani et al., 2021).

Two types of assessment are formative and summative. Popham (2008) said that it is not the nature of tests that labels them as summative or formative but the use to which the tests’ outcomes will be put. That is to say, the summative-formative manifestation of assessment does not stop at being a typology but expands to be purposive due to the nature of assessment. Summative assessment, then, has been defined by certain criteria. Cizek (2010) suggests two criteria that define summative assessment: (1) it is conducted at the end of a unit, and (2) its goal is mainly to characterize the performance of students or systems. Its major goal is to gain a measurement of attainment to be utilized in making decisions.

By Cizek’s definition, a summative assessment seeks to judge learners’ performance in every single course. Thus, providing diagnostic information is not what this type of assessment is concerned with. Significantly, the judgments made about students, teachers, or curricula are meant to grade, certificate, evaluate, and research how effective curricula are, and these are the purposes of summative assessment according to Cizek (2010).

According to Black and Wiliam (2006), summative assessment is given occasionally to assess what pupils know and do not know. This type of assessment is done after the learning has been finalized and provides feedback and information that summarize the learning and teaching process. Typically, no more formal learning occurs at this stage, other than incidental learning that may happen via completing assignments and projects (Wuest & Fisette, 2012). Summative assessment measures what students have learned and is mostly conducted at the end of a course of instruction (Abeywickrama & Brown, 2010; Liu et al., 2021; Rezai et al., 2022).

For Woods (2015), summative assessment provides information to judge the general value of instructional programs, while the outcomes of formative assessment are used to facilitate those programs. Based on Shepard (2006), a summative assessment must accomplish its major purpose of documenting what learners know and can do but, if carefully created, should also efficaciously fulfill a secondary objective of learning support.

Brown (2003) claimed that summative assessment aims at measuring or summarizing what students have learned. This means looking back and taking stock of how well students have fulfilled goals, but it does not necessarily pave the way to future improvement. Furthermore, summative assessment, also known as assessment of learning, is clarified by Spolsky and Hult (2008), who state that assessment of learning is less detailed and intends to find out the educational programs’ or students’ outcomes. Thus, summative assessment is applied to evaluate different language skills and learners’ achievements. Even though summative assessment has a main role in learners’ evaluation, it is not sufficient to track their advancement and to detect the major areas of weakness, and this is the essence of formative assessment (Pinchok & Brandt, 2009; Vadivel et al., 2021).

The term ‘formative assessment’ has been around for years and has been defined by many researchers. A clearer definition is provided by Brown (2003), who claims that formative assessment refers to evaluating learners in the process of “forming” their skills and competencies to help them keep up that growth process. It is also described as comprising all those activities conducted by instructors or by their learners that supply information to be utilized as feedback to adjust the learning and teaching activities in which they are involved (Fox et al., 2016).

Formative assessments aim to gain immediate feedback on students’ learning, through which the strengths and weaknesses of students can be diagnosed. Comprehensively, Wiliam (2011) suggests that practices in the classroom are formative to the extent that evidence about students’ accomplishments is elicited, interpreted, and utilized by instructors, students, or their classmates to make decisions about the subsequent steps in instruction that are likely to be better, or better founded, than the decisions they would have taken in the absence of the evidence that was elicited.

Through this definition, formative assessment actively involves both students’ and teachers’ participation as a key component in developing students’ performance. Assessment for learning, as its name suggests, aims at assessing learners’ progress (McCallum & Milner, 2021). Therefore, it is all about gathering data about learners’ achievement to recognize their progress in skills, requirements, and capabilities, as well as their weaknesses and strengths, before, during, and after educational courses to develop students’ learning and achievement (Douglas & Wren, 2008).

Besides, Popham (2008) considered formative assessment a strategic procedure in which educators or pupils utilize assessment-based evidence to modify what they are presently doing. This describes it as a planned process that does not occur randomly. Therefore, formative assessment is an ongoing procedure that provides learners with constructive, timely feedback, helping them achieve their learning goals and enhancing their achievements (Vogt et al., 2020). Formative assessment is a helpful technique that can provide students with formative help by evaluating the interactions between assessment and learning (Chan, 2021; Masita & Fitri, 2020).

Some criteria related to formative assessment have been presented by Cizek (2010). In his opinion, formative assessment attempts to identify students’ levels, whether high or low; to help educators plan subsequent instruction; to make it easier for students to continue their own learning, review their work, and evaluate themselves; and to make learners responsible for their own learning and research. For Cizek, formative assessment is a sufficient tool for learners and teachers to build proficiency in the learning-teaching process. All in all, concerning specific objectives, formative assessment is a goal-oriented process.

Tahir et al. (2012) stated that formative assessment is a diagnostic use of assessment that can provide feedback to instructors and learners throughout the instructional process. Marsh (2007) claimed that formative tests are a type of strategy prepared to recognize students’ learning problems and provide a remedial procedure to develop the performance of the majority of learners. For an assessment to be described as formative, the information provided must be used by the learners. The Assessment Reform Group (ARG) (2007) explains formative assessment as the procedure of seeking and interpreting evidence for instructors and their students to make decisions about where students are in their learning, where they need to go, and how best to get there. Kathy (2013) also argued that formative tests aim to analyze students’ learning problems to develop their academic attainment.

The theory behind our study is sociocultural theory, which states that knowledge is generated cooperatively within social contexts. It views learning as a condition wherein learners generate their own meanings from the materials and content delivered to them, rather than trying to memorize the information (Vygotsky, 1978). Based on sociocultural theory, learning can occur successfully when teachers and students have more interaction with each other.

Some empirical studies are reported here. Alahmadi et al. (2019) aimed to examine whether a formative speaking assessment produced any effect on learners’ performance in the summative test. Besides, they aimed to observe students’ learning and to provide useful feedback that can be applied by educators to develop learners’ achievement and assist them to detect their weaknesses and strengths in speaking skills. Their results indicated that formative assessment helped Saudi learners to solve the problems they encounter in speaking tests.

Mahshanian et al. (2019) highlighted the significance of summative assessment in conjunction with teacher-based (formative) assessment on learners’ performance. For this study, 170 EFL students at the advanced level were chosen and grouped based on the kind of assessment they had received. The subjects were administered exams for two main reasons. First, a general proficiency test was given to place the students at different levels of proficiency. Second, to compare students’ development under different kinds of assessment within a 4-month learning period, an achievement test of the course was administered both as the pre-test and the post-test. The participants’ scores on the achievement test were analyzed and compared using ANCOVA, ANOVA, and t-tests. Based on the outcomes of this research, we can conclude that an amalgamation of summative and formative assessment can result in better achievement for EFL students than either assessment discretely.

Imen (2020) attempted to determine the effects of formative assessment on EFL learners’ writing skills. Indeed, the goal of this study was to recognize the effects of formative assessment on developing the writing skills of first-year master’s students at Abdel Elhamid Ibn Badis University, in Mostaganem. This research also attempted to reveal an essential issue, namely the lack of execution of formative assessment in writing classrooms. To verify the hypotheses, two tools were applied to gather the data: a teachers’ questionnaire and a students’ questionnaire. The findings revealed that formative assessment was not extensively used in teaching and learning writing skills at the University of Mostaganem. The results of both questionnaires showed that if students were evaluated formatively, their writing skills could be highly enhanced.

Ashdale (2020) attempted to examine the influence of a particular formative assessment, named Progress Trackers, by comparing a control group that did not receive the Progress Tracker with an experimental group that received the formative-based assessment. The findings revealed no substantial differences between the experimental and control groups based on the pre-test and post-test scores. While not statistically significant, the experimental group showed a larger increase in learners with at least a 60% improvement in achievement. The lack of significant differences between the groups could stem from the ineffectiveness of the formative assessments or from the inability to exclude other factors in the class contexts. These could include the use of other formative assessments in both groups, the delivery of content, and the execution of the formative assessments.

Persaud Singh and Ewert (2021) investigated the effects of quizzes and mock exams as formative assessment on working adult learners’ achievement using a quasi-experimental quantitative design. One experimental group received both quizzes and mock exams, another group received mock exams only, and a control group received neither. The gathered data were analyzed using t-tests and ANOVA. The findings indicated noticeable differences in the levels of achievement for the groups receiving formative assessments in comparison to the control participants. The “mock exam” group slightly outperformed the “quizzes and mock exam” group.

Al Tayib Umar and Abdulmlik Ameen (2021) traced the effects of formative assessment on Saudi EFL students’ achievement in medical English. The research also tried to figure out teachers’ and students’ attitudes toward formative assessment. The participants were 98 students selected among the Preparatory Year learners at a Saudi university. They were assigned to an experimental group and a control group. The experimental students were given their English for Specific Purposes (ESP) courses following formative assessment techniques, whereas the control group was trained in their ESP courses under traditional assessment rules. The experimental group’s teachers were given intensive training courses in Saudi Arabia and abroad on how to use formative assessment principles in the classroom. At the end of the experiment, which continued for 120 days, the control and experimental groups sat for the end-of-term examination designed for all candidates in the Preparatory College. The grades of all participants in the two groups in the final exam were compared. The performance of the experimental group was found to be meaningfully higher than that of the control group. Instructors’ and students’ attitudes towards formative assessment were positive.

Hamedi et al. (2022) investigated the effects of using formative assessment via the Kahoot application on Iranian EFL students’ vocabulary knowledge as well as their burnout levels. This study was conducted on 60 participants in two groups, experimental and control. The results indicated that using formative assessment generated significant effects on Iranian EFL students’ vocabulary knowledge.

In conclusion, the above studies confirmed the positive effects of summative and formative assessment on language learning. Yet, little research has compared the effects of summative and formative assessment on Iranian EFL learners’ academic motivation, attitude toward learning, test anxiety, and self-regulation skill. Most studies in the domain of assessment examined the effects of summative and formative assessment on the main skills (reading, speaking, writing, and listening) and did not pay much attention to psychosocial variables; therefore, this research posed two questions to cover the existing gap.

  • RQ1. Does using formative and summative assessments positively affect Iranian EFL learners’ test anxiety, academic motivation, and self-regulation skill?
  • RQ2. Do Iranian EFL learners present positive attitudes toward learning through formative and summative assessments?

Methodology

Design of the study

Participants

The participants of this research were 72 Iranian EFL students who had studied English since 2016. The male EFL learners were selected based on the convenience sampling method by administering the Preliminary English Test (PET). They were selected from the Parsian English language institute, located in Ahvaz, Iran. The participants’ general English proficiency was intermediate and their average age was 21. The participants were divided into two experimental groups (summative and formative) and a control group.

Instrumentations

To homogenize the subjects in terms of general English proficiency, we gave a version of the PET, extracted from the book PET Practice Test (Quintana, 2008). Because of some limitations, only the reading, grammar, and vocabulary sections of the test were used in this study. We piloted the test on another, similar group and allotted 60 minutes for answering all its items. Its validity was accepted by some English experts and its reliability was .91.

Britner and Pajares’ (2006) Science Anxiety Scale (SAS) was used as the other instrument to assess the participants’ test anxiety. Some wordings of the items were changed to make them suitable for measuring test anxiety. The test’s 12 items required the participants to consider each statement (e.g., “I am worried that I will get weak scores in most of the exams”) and respond on a 6-point scale ranging from certainly false to certainly true. Based on Cronbach’s alpha, the reliability index of the anxiety test was .79.

The other tool used in this study was the Self-Regulatory Strategies Scale (SRSS), developed by Kadıoğlu et al. (2011) to assess the participants’ self-regulation skills. The SRSS was a 6-point Likert instrument (never, seldom, occasionally, often, frequently, and constantly) consisting of 29 statements in eight dimensions. Cronbach’s alpha showed that the reliability of the SRSS was .82.

We used the Attitude/Motivation Test Battery (AMTB) of Gardner (2004) to evaluate the respondents’ English learning motivation. This instrument had 26 items, each with six responses: Highly Disagree, Moderately Disagree, Somewhat Disagree, Somewhat Agree, Moderately Agree, and Highly Agree. We used Cronbach’s alpha to measure the reliability of the motivation questionnaire (r = .87). It should be noted that the motivation questionnaire, the SAS, and the SRSS were used as the pre-tests and post-tests of the research.

The last tool employed in this research was an attitude questionnaire examining the participants’ attitudes towards the effectiveness of summative and formative assessment on their English learning enhancement. The researchers created 17 Likert-scale items for this questionnaire, and the reliability of this instrument was .80. A 5-point Likert scale was used to show the degree of disagreement and agreement, from 1 to 5: highly disagree, disagree, no idea, agree, and highly agree. The validity of all the mentioned tools was substantiated by a group of English specialists.
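
The reliability figures in this section (.91, .79, .82, .87, and .80) are Cronbach’s alpha coefficients. For reference, here is a minimal sketch of the standard formula; the toy response matrix is invented for illustration and is not the study’s data:

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    X = np.asarray(item_scores, dtype=float)
    k = X.shape[1]                         # number of items
    item_vars = X.var(axis=0, ddof=1)      # per-item variances
    total_var = X.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Toy data: 5 respondents answering 4 Likert items (values 1-6)
responses = np.array([
    [5, 4, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
    [3, 2, 3, 2],
    [6, 5, 6, 5],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```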

Collecting the needed data

To start the study, the PET was administered to 96 EFL learners, and 72 intermediate participants were selected among them. As stated previously, the participants were divided into two experimental groups (summative and formative) and one control group. After that, the pre-tests of test anxiety, motivation, and self-regulation skill were administered to the participants of all groups. After the pretesting process, the treatment was conducted differently in each group; each group received special instruction.

One experimental group was instructed based on the rules of formative assessment. In the formative group, the teacher (researcher) assisted the students in evaluating their learning via self- and peer-assessment. Besides, the teacher’s comprehensive and descriptive elicitation of, and feedback on, information about students’ learning was central in the formative class. In fact, there were no tests at the end of the term, and the teacher was flexible concerning the students’ mistakes, providing them with constructive feedback including metalinguistic clues, elicitation, correction, repetition, clarification requests, and recasts.

In the summative class, the teacher assessed the students’ learning by giving mid-term and final exams. The teacher did not provide any elaborative feedback; his feedback was limited to yes/no and true/false. The control group received neither formative-based nor summative-based instruction. The teacher of the control group instructed them without utilizing any preplanned assessments, and they finished the course without any formative or summative assessments. After the treatment, the post-tests of test anxiety, motivation, and self-regulation skill were given to all groups to assess the influence of the intervention on their language achievement. In the final step, the attitude questionnaire was distributed to both experimental groups to check their opinions about the impacts of summative and formative assessment on their English learning improvement.

The whole study lasted 23 sessions; each took 50 min. In one session, the PET test was administered and in the next three sessions, three pre-tests were conducted. During 15 sessions, the treatment was carried out; in three sessions, three post-tests were given to the participants, and in the last session the attitudinal questionnaire was administered to examine the participants’ attitudes towards the effectiveness of summative and formative assessment of their English learning achievement.

Data analysis

Having prepared all the needed data via the procedures mentioned above, we took several statistical steps to answer the questions raised in this study. First, the data were analyzed descriptively to compute the means of the groups. Second, one-way ANOVA and Bonferroni tests were used to analyze the data inferentially. Third, a one-sample t-test was utilized to analyze the attitude questionnaire data.
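
As an illustration of this pipeline (a SciPy sketch, not the authors’ actual scripts), the group sizes below follow the study’s design, but the scores are random placeholders and the neutral midpoint used for the attitude test is an assumption:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder post-test scores for the three groups (24 learners each)
control = rng.normal(30, 11, 24)
summative = rng.normal(38, 11, 24)
formative = rng.normal(49, 10, 24)

# Normality check (the study used Kolmogorov-Smirnov)
for name, g in [("control", control), ("summative", summative), ("formative", formative)]:
    stat, p = stats.kstest((g - g.mean()) / g.std(ddof=1), "norm")
    print(f"K-S {name}: p = {p:.3f}")

# One-way ANOVA across the three groups
f_stat, p_val = stats.f_oneway(control, summative, formative)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.3f}")

# Pairwise t-tests with a Bonferroni correction (3 comparisons)
pairs = [("control", control, "summative", summative),
         ("control", control, "formative", formative),
         ("summative", summative, "formative", formative)]
for name_a, a, name_b, b in pairs:
    t, p = stats.ttest_ind(a, b)
    p_adj = min(p * len(pairs), 1.0)  # Bonferroni-adjusted p-value
    print(f"{name_a} vs {name_b}: t = {t:.2f}, adjusted p = {p_adj:.3f}")

# One-sample t-test of attitude scores against an assumed scale midpoint of 3
attitude = rng.normal(4.2, 0.5, 17)
t, p = stats.ttest_1samp(attitude, popmean=3.0)
print(f"attitude vs midpoint: t = {t:.2f}, p = {p:.3f}")
```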

Results and discussion

After checking and confirming the normal distribution of the data using the Kolmogorov-Smirnov test, we used several one-way ANOVA tests and report their results in the following tables:

As we see in Table 1, the mean scores of all groups are similar. The groups got almost equal scores on their anxiety pre-test, and the three groups were at the same level of anxiety before the instruction. This claim is verified in the following table with the help of one-way ANOVA.

Descriptive statistics of all groups on the test anxiety pre-tests

Group | N | Mean | Std. dev. | Std. error | 95% CI lower | 95% CI upper | Minimum | Maximum
Control | 24 | 27.70 | 11.37 | 2.32 | 22.90 | 32.51 | 14.00 | 49.00
Summative | 24 | 28.91 | 11.89 | 2.42 | 23.89 | 33.93 | 13.00 | 50.00
Formative | 24 | 28.41 | 10.93 | 2.23 | 23.79 | 33.03 | 14.00 | 49.00
Total | 72 | 28.34 | 11.25 | 1.32 | 25.70 | 30.99 | 13.00 | 50.00
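
As a quick arithmetic check on Table 1, the standard error and confidence interval columns follow from the usual formulas; for the control group (n = 24):

$$SE = \frac{SD}{\sqrt{n}} = \frac{11.37}{\sqrt{24}} \approx 2.32, \qquad 95\%\ \text{CI} = \bar{x} \pm t_{0.975,\,23}\,SE = 27.70 \pm 2.069 \times 2.32 \approx [22.90,\ 32.51],$$

matching the tabulated bounds up to rounding.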

According to the Sig value in Table 2, there is no noticeable difference between the test anxiety of the three groups. They were at the same anxiety level at the outset of the study. The inferential statistics show that all the participants had an equal amount of anxiety before they received the treatment.

Inferential statistics of all groups on the test anxiety pre-tests

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 17.69 | 2 | 8.84 | .06 | .93
Within groups | 8980.62 | 69 | 130.15 | |
Total | 8998.31 | 71 | | |

As seen in Table 3, the mean scores of the groups differ on the anxiety post-tests. Based on the descriptive statistics, the groups gained different scores on their anxiety post-test, and the experimental groups obtained better scores than the control group. This claim is substantiated in the following table by using a one-way ANOVA test.

Descriptive statistics of all groups on the test anxiety post-tests

Group | N | Mean | Std. dev. | Std. error | 95% CI lower | 95% CI upper | Minimum | Maximum
Control | 24 | 29.95 | 11.08 | 2.26 | 25.27 | 34.63 | 14.00 | 51.00
Summative | 24 | 37.91 | 10.80 | 2.20 | 33.35 | 42.47 | 19.00 | 60.00
Formative | 24 | 49.50 | 10.37 | 2.11 | 45.11 | 53.88 | 23.00 | 62.00
Total | 72 | 39.12 | 13.33 | 1.57 | 35.99 | 42.25 | 14.00 | 62.00

Table 4 depicts that the Sig value (.00) is less than .05; accordingly, one can conclude that there is a noticeable difference between the test anxiety post-tests of the three groups. They were at different anxiety levels at the end of the research. It seems that the experimental groups outdid the control group on the post-test.

Inferential statistics of all groups on the test anxiety post-tests

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 4635.08 | 2 | 2317.54 | 20.02 | .00
Within groups | 7986.79 | 69 | 115.75 | |
Total | 12,621.87 | 71 | | |
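
For reference, the F value in Table 4 follows directly from the sums of squares: with k = 3 groups and N = 72 participants,

$$F = \frac{MS_{\text{between}}}{MS_{\text{within}}} = \frac{4635.08/(k-1)}{7986.79/(N-k)} = \frac{2317.54}{115.75} \approx 20.02.$$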

In Table 5, the test anxiety levels of all groups are compared. This table shows that there are remarkable differences between the anxiety post-tests of the control group and both experimental groups. It also shows that the formative group outdid the control and summative groups; the formative group had the best performance among the three groups of this study.

Multiple comparisons by Bonferroni test (test anxiety)

(I) group | (J) group | Mean difference (I−J) | Std. error | Sig. | 95% CI lower | 95% CI upper
Control | Summative | −7.95 | 3.10 | .03 | −15.57 | −.33
Control | Formative | −19.54 | 3.10 | .00 | −27.16 | −11.92
Summative | Control | 7.95 | 3.10 | .03 | .33 | 15.57
Summative | Formative | −11.58 | 3.10 | .00 | −19.20 | −3.96
Formative | Control | 19.54 | 3.10 | .00 | 11.92 | 27.16
Formative | Summative | 11.58 | 3.10 | .00 | 3.96 | 19.20

a The mean differences are significant at the 0.05 level
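
For context, the Bonferroni procedure behind Tables 5, 10, and 15 is not spelled out in the text; in its standard textbook form, each of the m = 3 pairwise comparisons is tested at a reduced significance level:

$$\alpha_{\text{per comparison}} = \frac{\alpha}{m} = \frac{0.05}{3} \approx 0.0167, \qquad p_{\text{adj}} = \min(m \cdot p,\ 1).$$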

As observed in Table 6, all three groups’ performances on the self-regulation pre-tests are almost the same; their mean scores are almost equal. We used a one-way ANOVA to check the groups’ performances on the self-regulation pre-tests.

Descriptive statistics of the three groups on the self-regulation pre-tests

Group | N | Mean | Std. dev. | Std. error | 95% CI lower | 95% CI upper | Minimum | Maximum
Control | 24 | 77.54 | 17.02 | 3.47 | 70.35 | 84.73 | 39.00 | 99.00
Summative | 24 | 78.20 | 16.22 | 3.31 | 71.35 | 85.06 | 41.00 | 101.00
Formative | 24 | 76.83 | 16.78 | 3.42 | 69.74 | 83.92 | 39.00 | 98.00
Total | 72 | 77.52 | 16.45 | 1.93 | 73.66 | 81.39 | 39.00 | 101.00

Table 7 shows the inferential statistics of all groups on the self-regulation pre-tests. As the Sig value (.96) is higher than .05, the differences between the three groups are not statistically significant. Based on this table, all three groups had the same level of self-regulation ability at the outset of the study.

Inferential statistics of the three groups on the self-regulation pre-tests

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 22.69 | 2 | 11.34 | .04 | .96
Within groups | 19,203.25 | 69 | 278.30 | |
Total | 19,225.94 | 71 | | |

The mean scores of the control group, the summative group, and the formative group are 80.12, 130.04, and 147.25, respectively (Table 8). At first look, we can say that both experimental groups outperform the control participants, since their mean scores are much higher than the mean score of the control group.

Descriptive statistics of the three groups on the self-regulation post-tests

Group | N | Mean | Std. dev. | Std. error | 95% CI lower | 95% CI upper | Minimum | Maximum
Control | 24 | 80.12 | 17.14 | 3.50 | 72.88 | 87.36 | 47.00 | 114.00
Summative | 24 | 130.04 | 10.44 | 2.13 | 125.62 | 134.45 | 109.00 | 146.00
Formative | 24 | 147.25 | 27.19 | 5.55 | 135.76 | 158.73 | 39.00 | 167.00
Total | 72 | 119.13 | 34.52 | 4.06 | 111.02 | 127.25 | 39.00 | 167.00

The results indicate significant differences between the self-regulation post-tests of the groups in favor of the experimental groups (Table 9). Based on the inferential statistics, the performances of the three groups on the self-regulation post-test are different, and the summative and formative groups outperform the control group.

Inferential statistics of the three groups on the self-regulation post-tests

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 58,348.52 | 2 | 29,174.26 | 76.60 | .00
Within groups | 26,278.08 | 69 | 380.84 | |
Total | 84,626.61 | 71 | | |

The outcomes in Table 10 indicate that both experimental groups performed better than the control group on the self-regulation post-tests. The findings also show that the formative group performed better than the other two groups; the treatment had the greatest effect on the formative group.

Multiple comparisons by Bonferroni test (self-regulation)

(I) group | (J) group | Mean difference (I−J) | Std. error | Sig. | 95% CI lower | 95% CI upper
Control | Summative | −49.91 | 5.63 | .00 | −63.73 | −36.09
Control | Formative | −67.12 | 5.63 | .00 | −80.94 | −53.30
Summative | Control | 49.91 | 5.63 | .00 | 36.09 | 63.73
Summative | Formative | −17.20 | 5.63 | .01 | −31.03 | −3.38
Formative | Control | 67.12 | 5.63 | .00 | 53.30 | 80.94
Formative | Summative | 17.20 | 5.63 | .01 | 3.38 | 31.03

The control group's mean score is 90.33, the summative group's is 91.75, and the formative group's is 92.45 (Table 11). Accordingly, the three groups had a roughly equal degree of motivation before the treatment was conducted.

Table 11. Descriptive statistics of the three groups on the motivation pre-tests

Group       N    Means   Std. deviations   Std. errors   95% CI lower bounds   95% CI upper bounds   Minimum   Maximum
Control     24   90.33   25.08             5.11          79.74                 100.92                50.00     149.00
Summative   24   91.75   22.08             4.50          82.42                 101.07                55.00     128.00
Formative   24   92.45   21.69             4.42          83.29                 101.62                55.00     129.00
Total       72   91.51   22.69             2.67          86.18                 96.84                 50.00     149.00

Table 12 presents the inferential statistics for the three groups on the motivation pre-tests. The Sig value (.94) is larger than 0.05; consequently, no significant difference is observed among the groups on the motivation pre-tests. The inferential statistics show that the students in the three groups had the same degree of motivation before receiving the treatment.

Table 12. Inferential statistics of the three groups on the motivation pre-tests

                 Sum of squares   df   Mean squares   F     Sig.
Between groups   56.19            2    28.09          .05   .94
Within groups    36,519.79        69   529.27
Total            36,575.98        71

As shown in Table 13, the mean scores of the summative and formative groups on the motivation post-tests are 115.79 and 127.83, respectively, while the mean of the control group is 92.87. The experimental participants thus appear to outperform the control participants on the motivation post-tests, as their mean scores are higher than the control group's.

Table 13. Descriptive statistics of the three groups on the motivation post-tests

Group       N    Means    Std. deviations   Std. errors   95% CI lower bounds   95% CI upper bounds   Minimum   Maximum
Control     24   92.87    20.99             4.28          84.00                 101.74                60.00     129.00
Summative   24   115.79   13.50             2.75          110.09                121.49                99.00     140.00
Formative   24   127.83   12.51             2.55          122.54                133.11                100.00    150.00
Total       72   112.16   21.58             2.54          107.09                117.23                60.00     150.00

Table 14 presents the inferential statistics for the three groups on the motivation post-tests. The Sig value (.00) is less than 0.05; therefore, the differences between the groups are significant. Indeed, the experimental groups outperformed the control group after the instruction, and this improvement can be ascribed to the treatment.

Table 14. Inferential statistics of the three groups on the motivation post-tests

                 Sum of squares   df   Mean squares   F       Sig.
Between groups   15,138.08        2    7,569.04       29.12   .00
Within groups    17,933.91        69   259.91
Total            33,072.00        71

The mean scores on the motivation post-tests are compared in Table 15. There are noticeable differences between the post-tests of all groups, and the formative participants performed better than the other two groups. This suggests that formative assessment is more effective than summative assessment in EFL classes.

Table 15. Multiple comparisons by Bonferroni test (motivation)

(I) groups   (J) groups   Mean differences (I-J)   Std. errors   Sig.   95% CI lower bounds   95% CI upper bounds
Control      Summative    −22.91                   4.65          .00    −34.33                −11.49
Control      Formative    −34.95                   4.65          .00    −46.37                −23.53
Summative    Control      22.91                    4.65          .00    11.49                 34.33
Summative    Formative    −12.04                   4.65          .03    −23.46                −.62
Formative    Control      34.95                    4.65          .00    23.53                 46.37
Formative    Summative    12.04                    4.65          .03    .62                   23.46

As depicted in Table 16, the t-value is 63.72, df = 16, and Sig = .00, which is less than 0.05. This implies that the Iranian students held positive attitudes towards the effectiveness of summative and formative assessments for their language learning improvement.

Table 16. One-sample test of the attitude questionnaire

Test value = 0

         t       df   Sig. (2-tailed)   Mean differences   95% CI lower   95% CI upper
Scores   63.72   16   .000              4.52               4.37           4.67
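
The one-sample t-test in Table 16 compares the questionnaire scores against a test value of 0; df = 16 implies 17 observations. The following is a minimal sketch with hypothetical ratings, not the study's data.

    # One-sample t-test against a test value of 0.
    # Seventeen hypothetical ratings give df = n - 1 = 16, matching Table 16.
    import numpy as np
    from scipy import stats

    ratings = np.array([4.4, 4.6, 4.5, 4.3, 4.7, 4.5, 4.6, 4.4, 4.5,
                        4.6, 4.3, 4.7, 4.5, 4.4, 4.6, 4.5, 4.6])

    t_stat, p_value = stats.ttest_1samp(ratings, popmean=0.0)
    print(f"t = {t_stat:.2f}, df = {ratings.size - 1}, p = {p_value:.4f}")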

Briefly, the results indicate that both experimental groups had better performances than the control group in their post-tests. The formative group had the best performance among the three groups of this study. Additionally, the results reveal that the participants of the present research had positive attitudes towards the effectiveness of both formative and summative assessments on their language learning development.

After analyzing the data, it was found that all three groups were at the same levels of test anxiety, motivation, and self-regulation skill at the outset of the research, but the performances of the three groups differed at the end of the investigation. Both experimental groups outdid the control group on their post-tests, and the formative group performed best of the three. Although both types of assessment (summative and formative) affected the test anxiety, motivation, and self-regulation skills of the EFL learners, formative assessment was the most effective. The findings of the current research also indicated that both experimental groups held positive attitudes toward the implementation of summative and formative assessments in EFL classes.

The findings of this study are supported by Persaud Singh and Ewert (2021), who examined the impacts of formative assessment on adult students' language improvement and found meaningful differences in language achievement between the formative participants and the control participants, in favor of the former. Additionally, our findings are corroborated by Alahmadi et al. (2019), who explored the effects of formative speaking assessments on EFL learners' performances in speaking tests and showed that formative assessment helped Saudi EFL learners solve the problems they encountered in speaking tests.

In addition, our findings are in accordance with Mahshanian et al. (2019), who confirmed that combining summative and formative assessment can result in better achievement in English language learning. Our investigation also lends support to the findings of Buyukkarci and Sahinkarakas (2021), who verified the positive effects of formative assessment on learners' language achievement, and agrees with Ounis (2017), who stated that formative assessment facilitated and supported students' learning. Our findings are further supported by sociocultural theory, which focuses on the role of social interactions between students and their teachers in the classroom. From this perspective, learning is mainly a social process, and students' cognitive functions develop through their interactions with those around them.

Furthermore, our results agree with those of Imen (2020), who investigated the impact of formative assessment on EFL students' writing abilities and found that using formative assessment develops participants' writing skills. Moreover, our outcomes are supported by Ozan and Kıncal (2018), who investigated the influence of formative assessment on students' attitudes toward lessons, academic achievement, and self-regulation skills. They revealed that the experimental class that received treatment through formative assessment practices performed better academically and held more positive attitudes towards the classes than the control class.

Regarding the participants' positive attitudes towards formative and summative assessment, our results are in line with Tekin (2010), who found that formative assessment practices meaningfully improved students' attitudes towards mathematics learning; the participants in the treatment group held positive attitudes about mathematics learning. In addition, King (2003) asserted that formative assessments enhanced learners' attitudes towards science classes, and Hwang and Chang (2011) revealed that formative assessment strongly boosted students' attitudes towards, and interest in, learning in local culture classes.

One explanation for the formative group's outperformance of the other two groups may be that its members received considerably more input: they were provided with different kinds of feedback and took more exams during the semester. These exams and this feedback may account for their success in language achievement. This is in line with Krashen's (1981) input hypothesis, which holds that students who are exposed to more input can learn more.

Other possible explanations for our results are that formative assessments are not graded, so they take the anxiety away from the assessees and remove the feeling that they must get everything right. Instead, formative assessments serve as practice through which students can get assistance along the way, before the final tests. Teachers check for understanding when students struggle during a lesson and address these issues early on instead of waiting until the end of the unit to assess. As a result, teachers have to do less reteaching at the end, because many problems with mastery are addressed before the final tests. These advantages may explain our findings.

In addition, monitoring the students' learning via formative assessment may be another justification for our results. Monitoring the learning process gives teachers an opportunity to provide constructive feedback that improves their students' language learning. When teachers continuously monitor students' growth and modify instruction to ensure constant development, progress towards meeting the standards on summative assessments becomes easier and more predictable. By knowing precisely what their students understand before and during instruction, teachers have far more power to improve students' mastery of the subject matter than if they find out only after a lesson or unit is complete.

It is important to point out that when instructors continually evaluate their students' development and modify the curriculum to assure constant improvement, progress toward fulfilling the requirements of summative assessments becomes simpler and more predictable. If teachers wait until the end of a session or unit to find out how well their learners have mastered the material, they have considerably less influence over learning than if they find out earlier, during teaching. The value of formative assessment lies in the critical information about student comprehension that it provides throughout the learning process, in the chance it gives educators to provide learners with quick, efficient, and action-oriented feedback, and in the chance it gives them to alter their own behavior so that every learner has the opportunity to learn and re-learn the material. Learners whose academic performance falls at the extreme ends of the normal curve, those who are struggling and those who excel academically, benefit the most from formative evaluation: their learning requirements are often unique and highly specialized, and to meet them the instructor needs up-to-date data. In addition, using frequent formative evaluation to remediate learning gaps brought about by COVID-19 guarantees that educators can provide remediation promptly.

Another justification for our findings lies in the strength of formative assessments: the formative information they provide about students' comprehension throughout the learning process and the opportunities they give teachers to deliver action-oriented, timely feedback and to change their own behaviors so that each learner has an opportunity to learn and re-learn. More particularly, formative assessment can help students detect their own weaknesses and strengths and target areas that need more effort and work. All of the positive points enumerated for formative assessments may explain the results obtained in the current research.

Moreover, the better performance of the assessment groups may be due to numerous reasons. In the first place, consistently evaluating students' progress helps keep learning objectives at the forefront of one's mind. This ensures that learners have a distinct goal to strive towards and that instructors have the opportunity to help clear up misconceptions before learners get off track. Second, engaging in the process of formative assessment enables instructors to gather information that reveals their students' requirements. When instructors have a clear grasp of what it takes for their students to be successful, they are better able to design challenging educational environments that push every learner to their full potential. Third, the primary role of formative assessment in enhancing academic achievement is to provide both learners and instructors with frequent feedback on the progress being made toward their objectives. Learners can bridge the gap between their existing knowledge and their learning objectives through the use of formative assessment (Greenstein, 2010). The fourth benefit of formative assessment is an increase in motivation: formative assessment entails creating learning objectives and monitoring progress towards them, and when learners have a clear idea of where they want to go, their performance dramatically improves. Fifth, students must identify a purpose for the work that is assigned to them in the classroom; connecting the learning objectives with real-world problems and situations draws students into the instructional activities and feeds their natural curiosity about the world. Sixth, an in-depth examination of the data gathered via formative assessment gives educators the opportunity to investigate their own methods of teaching and identify those that are successful and those that are not; some strategies that work for one group of learners will not work for another. Lastly, students become self-regulated when they are provided with the tools they need to set, track, and ultimately achieve their own learning objectives. Students may develop into self-reliant thinkers if they are exposed to models of high-quality work and given adequate time to reflect on and refine their own work.

The positive effects of formative and summative assessment on students' motivation are supported by Self-Determination Theory (SDT), which provides a way of understanding human motivation in any context (Ryan & Deci, 2000). SDT attempts to understand human motivation beyond the simple intrinsic/extrinsic model. It suggests that human motivation ranges from fully intrinsic motivation, characterized by fully autonomous behavior undertaken "for its own sake," to fully extrinsic motivation, characterized by behavior that is fully heteronomous and instrumentalized to some other end.

In this study, the self-regulatory skills of the students in the experimental groups (EGs), where formative assessment practices were applied, differed significantly from those of the students in the control group (CG), where no formative assessment practices were applied. Thus, students' self-regulation was shown to improve as a result of formative assessment procedures. Similar findings were reported in the experimental research of Xiao and Yang (2019), which compared the self-regulation abilities of EG and CG learners in secondary school and discovered a substantial difference in favor of the former group. Research findings based on qualitative data reveal that learners engaged in a variety of cognitive techniques and self-regulatory learning practices; the participants acknowledged that they were an integral part of their own learning and accepted personal responsibility for their progress. Teachers reported that learners' ability to self-regulate improved as a result of formative assessment, which fostered ongoing, meaningful dialogue between teachers and learners focused on learning effort and performance. Increased success in diagnostic examinations, thanks to the use of formative assessment, may support students' progress in self-regulation and metacognitive abilities as well as their growth in accordance with educational standards (DeLuca et al., 2015). Woods (2015) examined the link between formative assessment and self-regulation, highlighting that teachers who use formative assessment strategies need to understand their students' self-regulatory learning processes in order to make appropriate decisions for their classrooms. Furthermore, Woods (2015) recommended that educators make regular use of formative assessment to foster the growth of learners' abilities to self-regulate and to boost their motivation. Wiliam (2014) likewise asserted that self-regulated learning can be an important component of effective formative assessment in relation to the techniques of explaining, sharing, and comprehending learning goals and success criteria, and of students taking responsibility for their own learning.

It is vital to note that learners who have developed self-regulation skills employ their cognitive abilities; work toward their learning objectives; seek out appropriate support from peers, adults, and authority figures; and, most significantly, accept personal accountability for their academic success. As a result, learners' abilities to self-regulate have a direct effect on learning-oriented formative assessment and on the applications designed to eliminate learning deficiencies. Self-regulation is an ability that takes time and practice to acquire, but it can be developed with the right tools and a consistent strategy. Formative assessment techniques were shown to boost learners' ability to self-regulate, although this effect was small when our findings were combined with those in the literature. This may be attributed to the fact that, although formative assessment procedures were implemented for an academic year, they were limited to the context of a single classroom, and students' abilities to self-regulate may develop and evolve over a longer period.

The findings of this research can increase students' knowledge of the two types of assessment and may encourage them to ask their teachers to assess their performance formatively during the semester. The findings can also help instructors implement more formative-based assessment and feedback in their classes, and they highlight for teachers the importance of frequent input, feedback, and examinations. Careful analysis of formative assessment data permits teachers to inspect their instructional practices in order to understand which are producing positive results and which are not; practices that are effective for one group of students may not be effective for another. The implications of this research can help students compensate for their deficiencies by taking responsibility for their own learning instead of merely attempting to get good grades. In this respect, formative assessments help students manage negative variables such as the anxiety produced by heavy examination and grading.

Using formative assessments helps teachers gather information that reveals students' needs. Once teachers understand what students need to be successful, they can create a suitable learning setting that challenges each learner to grow. Providing students and teachers with regular feedback on progress towards their aims is the major function of formative assessment and helps increase academic accomplishment. Formative assessments can help students close the gap between their present knowledge and their learning objectives. Moreover, formative assessment gives students evidence of their current progress so that they can actively monitor and modify their own learning, and it enables them to track their educational objectives. Through formative assessment, students can also measure their learning at a metacognitive level. As students are among the main agents of the teaching-learning process, instructors must share the learning objectives with them; this sharing can develop students' learning of basic knowledge as well as higher-order cognitive processes such as application and transfer (Fulmer, 2017). In fact, if learners know what they are expected to learn in a lesson, they will concentrate more on those areas. Formative assessments make teaching more effective by guiding learners to achieve learning objectives, identifying learning needs, modifying teaching accordingly, and increasing teachers' awareness of efficient teaching methods. Lastly, our findings may aid materials developers in incorporating more formative-based assessment activities into EFL textbooks.

In conclusion, this study demonstrated the positive impacts of applying formative assessments on Iranian EFL students' academic motivation, attitudes toward learning, test anxiety, and self-regulation skills. Therefore, teachers are strongly recommended to use formative assessment in their classes to help students improve their language learning. Formative assessment allows teachers to modify instruction according to the results; making such modifications and improvements can generate immediate benefits for their students' learning.

A further conclusion is that formative assessment gives teachers the ability to provide continuous feedback to their students. This allows students to be part of the learning environment and to develop self-assessment strategies that help them understand their own thinking processes. All in all, providing frequent feedback during the learning process is an efficient technique for motivating and encouraging students to learn a language more successfully. Indeed, by assessing students during the lesson, teachers can help them improve their skills and examine whether they are progressing. Thus, formative assessment is an essential part of teaching that should be used in EFL instructional contexts.

As we could not include many participants in our study, we recommend that future researchers recruit larger samples to increase the generalizability of their results. We worked only with male EFL learners; future studies should include both genders. We could not gather qualitative data to enrich our results; future researchers are advised to collect both quantitative and qualitative data to strengthen the validity of their results. Future researchers are also encouraged to examine the effects of summative and formative assessments on language skills and sub-skills, and to investigate the effects of other types of assessment on language skills and sub-skills as well as on the psychological variables involved in language learning.

Acknowledgements

Not applicable.

Abbreviations

EFL: English as a foreign language
ANOVA: Analysis of variance
PET: Preliminary English Test
SAS: Science Anxiety Scale
SRSS: Self-Regulatory Strategies Scale
AMTB: Attitude/Motivation Test Battery
SDT: Self-Determination Theory
EG: Experimental group
CG: Control group

Authors’ contributions

All authors had equal contributions. The author(s) read and approved the final manuscript.

Authors’ information

Seyed M. Ismail is an assistant professor at Prince Sattam Bin Abdulaziz University, Saudi Arabia. His research interests are teaching and learning, testing, and educational strategies. He has published many papers in various journals.

D. R. Rahul is an assistant professor at the School of Science and Humanities, Shiv Nadar University Chennai, Chennai, India. He has published several research papers in national and international language teaching journals.

Indrajit Patra is an independent researcher. He received his PhD from NIT Durgapur, West Bengal, India.

Ehsan Rezvani is an assistant professor in Applied Linguistics at Islamic Azad University, Isfahan (Khorasgan) Branch, Isfahan, Iran. He has published many research papers in national and international language teaching journals.

Funding

We did not receive any funding at any stage.

Availability of data and materials

Declarations

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Seyed M. Ismail, Email: [email protected] .

D. R. Rahul, Email: rahuldrnitt@gmail.com.

Indrajit Patra, Email: ipmagnetron0@gmail.com.

Ehsan Rezvani, Email: [email protected] .

  • Abeywickrama P, Brown HD. Language assessment: Principles and classroom practices. Pearson Longman; 2010.
  • Ahmad S. Relationship of academic SE to self-regulated learning, SI, test anxiety and academic achievement. International Journal of Education. 2012;4(1):12–25. doi:10.5296/ije.v4i1.1091.
  • Ahmadi S, Namazizadeh M, Abdoli B, Seyedalinejad A. Comparison of achievement motivation of football players between the top and bottom teams of the Football Premier League. Olympic Quarterly. 2009;17(3):19–27.
  • Al Tayib Umar A, Abdulmlik Ameen A. The effects of formative evaluation on students’ achievement in English for specific purposes. Journal of Educational Research and Reviews. 2021;9(7):185–197. doi:10.33495/jerr_v9i7.21.134.
  • Alahmadi N, Alrahaili M, Alshraideh D. The impact of the formative assessment in speaking test on Saudi students’ performance. Arab World English Journal. 2019;10(1):259–270. doi:10.24093/awej/vol10no1.22.
  • Aouine A. English language assessment in the Algerian middle and secondary schools: A context evaluation. 2011.
  • Ashdale M. The effect of formative assessment on achievement and motivation. 2020.
  • Assessment Reform Group. Assessment for learning. 2007.
  • Black P, Harrison C, Lee C, Marshall B, Wiliam D. Assessment for learning: Putting it into practice. Open University Press; 2004.
  • Black P, Wiliam D. Assessment for learning in the classroom. Assessment and Learning. 2006;5:9–25.
  • Britner SL, Pajares F. Sources of science SE beliefs of middle school students. Journal of Research in Science and Teaching. 2006;43(5):485–499. doi:10.1002/tea.20131.
  • Brown HD. Language assessment principles and classroom practices. Oxford University Press; 2003.
  • Buyukkarci K, Sahinkarakas S. The impact of formative assessment on students’ assessment preferences. The Reading Matrix: An International Online Journal. 2021;21(1):142–161.
  • Chalak A, Kassaian Z. Motivation and attitudes of Iranian undergraduate EFL students towards learning English. GEMA Online Journal of Language Studies. 2010;10(2):37–56.
  • Chan KT. Embedding formative assessment in blended learning environment: The case of secondary Chinese language teaching in Singapore. Education Sciences. 2021;11(7):360. doi:10.3390/educsci11070360.
  • Cizek GJ. An introduction to formative assessment: History, characteristics, and challenges. In: Andrade HL, Cizek GJ, editors. Handbook of formative assessment. Routledge; 2010. pp. 3–17.
  • Clark I. Formative assessment: Policy, perspectives and practice. Florida Journal of Educational Administration & Policy. 2011;4(2):158–180.
  • Craig KJ, Brown KJ, Baum A. Environmental factors in the etiology of anxiety. 2000.
  • DeLuca C, Klinger D, Pyper J, Woods J. Instructional rounds as a professional learning model for systemic implementation of Assessment for Learning. Assessment in Education: Principles, Policy & Practice. 2015;22(1):122–139. doi:10.1080/0969594X.2014.967168.
  • Dixson DD, Worrell FC. Formative and summative assessment in the classroom. Theory Into Practice. 2016;55(2):153–159. doi:10.1080/00405841.2016.1148989.
  • Douglas G, Wren D. Using formative assessment to increase learning. Virginia Beach City Public Schools; 2008.
  • Ellis R. The study of second language acquisition. Oxford University Press; 1994.
  • Firouznia S, Yousefi A, Ghassemi G. The relationship between academic motivation and academic achievement in medical students of Isfahan University of Medical Sciences. Iranian Journal of Medical Education. 2009;9(1):79–84.
  • Fox J, Haggerty J, Artemeva N. Mitigating risk: The impact of a diagnostic assessment procedure on the first-year experience in engineering. In: Read J, editor. Post-admission language assessment of university students. Springer; 2016. pp. 43–65.
  • Fulmer SM. Should we share learning outcomes/objectives with students at the start of a lesson? 2017.
  • Gardner RC. Attitude/Motivation Test Battery: International AMTB research project. The University of Western Ontario; 2004.
  • Ghahderijani BH, Namaziandost E, Tavakoli M, Kumar T, Magizov R. The comparative effect of group dynamic assessment (GDA) and computerized dynamic assessment (C-DA) on Iranian upper-intermediate EFL learners’ speaking complexity, accuracy, and fluency (CAF). Language Testing in Asia. 2021;11:25. doi:10.1186/s40468-021-00144-3.
  • Glazer N. Formative plus summative assessment in large undergraduate courses: Why both? International Journal of Teaching and Learning in Higher Education. 2014;26(2):276–286.
  • Greenstein L. What teachers really need to know about formative assessment. ASCD; 2010.
  • Hamedi A, Fakhraee Faruji L, Amiri Kordestani L. The effectiveness of using formative assessment by Kahoot application on Iranian intermediate EFL learners’ vocabulary knowledge and burnout level. Journal of New Advances in English Language Teaching and Applied Linguistics. 2022;4(1):768–786.
  • Heritage M. From formative assessment: Improving teaching and learning. 2012.
  • Hwang HJ, Chang HF. A formative assessment-based mobile learning approach to improving the learning attitudes and achievements of students. Computers and Education. 2011;56:1023–1031. doi:10.1016/j.compedu.2010.12.002.
  • Imen. The impact of formative assessment on EFL students’ writing skill. 2020.
  • Kadıoğlu C, Uzuntiryaki E, Çapa-Aydın Y. Development of Self-Regulatory Strategies Scale (SRSS). Eğitim ve Bilim. 2011;36(160):11–23.
  • Kara A. The effect of a ‘learning theories’ unit on students’ attitudes towards learning. Australian Journal of Teacher Education. 2009;34(3):100–113. doi:10.14221/ajte.2009v34n3.5.
  • Kathy D. 22 essay assessment technique for measuring in teaching learning. 2013.
  • King MD. The effects of formative assessment on student self-regulation, motivational beliefs, and achievement in elementary science (Doctoral dissertation). 2003.
  • Krashen S. Second language acquisition and second language learning. Pergamon Press; 1981.
  • Liu F, Vadivel B, Mazaheri F, Rezvani E, Namaziandost E. Using games to promote EFL learners’ willingness to communicate (WTC): Potential effects and teachers’ attitude in focus. Frontiers in Psychology. 2021;4526.
  • Mahshanian A, Shoghi R, Bahram M. Investigating the differential effects of formative and summative assessment on EFL learners’ end-of-term achievement. Journal of Language Teaching and Research. 2019;10(5):1055–1066. doi:10.17507/jltr.1005.19.
  • Marsh CJ. A critical analysis of the use of formative assessment in schools. Educational Research for Policy and Practice. 2007;6(1):25–29. doi:10.1007/s10671-007-9024-z.
  • Masita M, Fitri N. The use of Plickers for formative assessment of vocabulary mastery. Ethical Lingua: Journal of Language Teaching and Literature. 2020;7(2):311–320. doi:10.30605/25409190.179.
  • McCallum S, Milner MM. The effectiveness of formative assessment: Student views and staff reflections. Assessment and Evaluation in Higher Education. 2021;46(1):1–16. doi:10.1080/02602938.2020.1754761.
  • Nunan D. Research methods in language learning. CUP; 1992.
  • Ounis A. The assessment of speaking skills at the tertiary level. International Journal of English Linguistics. 2017;7(4):95–113. doi:10.5539/ijel.v7n4p95.
  • Ozan C, Kıncal RY. The effects of formative assessment on academic achievement, attitudes toward the lesson, and self-regulation skills. Educational Sciences: Theory and Practice. 2018;18:85–118.
  • Palomba CA, Banta TW. Assessment essentials: Planning, implementing, and improving assessment in higher education. Jossey-Bass Publishers; 1999.
  • Pappamihiel NE. English as a second language students and English language anxiety: Issues in the mainstream classroom. ProQuest Education Journal. 2002;36(3):327–355.
  • Paul R, Elder L. Critical thinking: Tools for taking charge of your professional and personal life. Pearson Education; 2013.
  • Persaud Singh V, Ewert D. The effect of formative assessment on performance in summative assessment: A study on business English students in a language training center. 2021.
  • Pinchok N, Brandt WC. Connecting formative assessment research to practice: An introductory guide for educators. Learning Point; 2009.
  • Popham WJ. Classroom assessment: What teachers need to know. 5th ed. Prentice Hall; 2008.
  • Quintana J. PET practice tests. Oxford University Press; 2008.
  • Remmi F, Hashim H. Primary school teachers’ usage and perception of online formative assessment tools in language assessment. International Journal of Academic Research in Progressive Education and Development. 2021;10(1):290–303. doi:10.6007/IJARPED/v10-i1/8846.
  • Rezai A, Namaziandost E, Miri M, Kumar T. Demographic biases and assessment fairness in classroom: Insights from Iranian university teachers. Language Testing in Asia. 2022;12(1):1–20. doi:10.1186/s40468-022-00157-6.
  • Robinowitz A. From principles to practice: An embedded assessment system. Applied Measurement in Education. 2010;13(2):181–208.
  • Ryan RM, Deci EL. Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist. 2000;55(1):68–95. doi:10.1037/0003-066X.55.1.68.
  • Sepehrian A. Self-efficacy, achievement motivation and academic procrastination as predictors of academic achievement in pre-college students. Proceeding of the Global Summit on Education. 2013;6:173–178.
  • Shepard LA. Classroom assessment. In: Brennan RL, editor. Educational measurement. 4th ed. American Council on Education/Praeger; 2006. pp. 623–646.
  • Spolsky B, Halt FM. The handbook of educational linguistics. Blackwell; 2008.
  • Tahir M, Tariq H, Mubashira K, Rabbia A. Impact of formative assessment on academic achievement of secondary school students. International Journal of Business and Social Science. 2012;3(17).
  • Tekin EG. Matematik eğitiminde biçimlendirici değerlendirmenin etkisi [Effect of formative assessment in mathematics education]. 2010.
  • Tella J, Indoshi FC, Othuon LA. Relationship between students’ perspectives on the secondary school English curriculum and their academic achievement in Kenya. Research. 2010;1(9):390–395.
  • Vadivel B, Namaziandost E, Saeedian A. Progress in English language teaching through continuous professional development—teachers’ self-awareness, perception, and feedback. Frontiers in Education. 2021;6:757285. doi:10.3389/feduc.2021.757285.
  • Vogt K, Tsagari D, Csépes I, Green A, Sifakis N. Linking learners’ perspectives on language assessment practices to teachers’ assessment literacy enhancement (TALE): Insights from four European countries. Language Assessment Quarterly. 2020;17(4):410–433. doi:10.1080/15434303.2020.1776714.
  • Vygotsky LS. Mind in society: The development of higher psychological processes. Harvard University Press; 1978.
  • Wiliam D. Embedded formative assessment. Solution Tree; 2011.
  • Wiliam D. Formative assessment and contingency in the regulation of learning processes. 2014.
  • Wininger SR. Using your tests to teach: Formative summative assessment. Teaching of Psychology. 2005;32(3):164–166. doi:10.1207/s15328023top3203_7.
  • Woods N. Formative assessment and self-regulated learning. The Journal of Education. 2015. https://thejournalofeducation.wordpress.com/2015/05/20/formative-assessment-and-self-regulated-learning/
  • Wuest DA, Fisette JL. Foundations of physical education, exercise science, and sport. 17th ed. McGraw-Hill; 2012.
  • Xiao Y, Yang M. Formative assessment and self-regulated learning: How formative assessment supports students’ self-regulation in English language learning. System. 2019;81:39–49. doi:10.1016/j.system.2019.01.004.

SYSTEMATIC REVIEW article

A critical review of research on student self-assessment.

Heidi L. Andrade

  • Educational Psychology and Methodology, University at Albany, Albany, NY, United States

This article is a review of research on student self-assessment conducted largely between 2013 and 2018. The purpose of the review is to provide an updated overview of theory and research. The treatment of theory involves articulating a refined definition and operationalization of self-assessment. The review of 76 empirical studies offers a critical perspective on what has been investigated, including the relationship between self-assessment and achievement, consistency of self-assessment and others' assessments, student perceptions of self-assessment, and the association between self-assessment and self-regulated learning. An argument is made for less research on consistency and summative self-assessment, and more on the cognitive and affective mechanisms of formative self-assessment.

This review of research on student self-assessment expands on a review published as a chapter in the Cambridge Handbook of Instructional Feedback (Andrade, 2018, reprinted with permission). The timespan for the original review was January 2013 to October 2016. A lot of research has been done on the subject since then, including at least two meta-analyses; hence this expanded review, in which I provide an updated overview of theory and research. The treatment of theory presented here involves articulating a refined definition and operationalization of self-assessment through a lens of feedback. My review of the growing body of empirical research offers a critical perspective, in the interest of provoking new investigations into neglected areas.

Defining and Operationalizing Student Self-Assessment

Without exception, reviews of self-assessment (Sargeant, 2008; Brown and Harris, 2013; Panadero et al., 2016a) call for clearer definitions: What is self-assessment, and what is not? This question is surprisingly difficult to answer, as the term self-assessment has been used to describe a diverse range of activities, such as assigning a happy or sad face to a story just told, estimating the number of correct answers on a math test, graphing scores for dart throwing, indicating understanding (or the lack thereof) of a science concept, using a rubric to identify strengths and weaknesses in one's persuasive essay, writing reflective journal entries, and so on. Each of those activities involves some kind of assessment of one's own functioning, but they are so different that distinctions among types of self-assessment are needed. I will draw those distinctions in terms of the purposes of self-assessment which, in turn, determine its features: a classic form-fits-function analysis.

What is Self-Assessment?

Brown and Harris (2013) defined self-assessment in the K-16 context as a “descriptive and evaluative act carried out by the student concerning his or her own work and academic abilities” (p. 368). Panadero et al. (2016a) defined it as a “wide variety of mechanisms and techniques through which students describe (i.e., assess) and possibly assign merit or worth to (i.e., evaluate) the qualities of their own learning processes and products” (p. 804). Referring to physicians, Epstein et al. (2008) defined “concurrent self-assessment” as “ongoing moment-to-moment self-monitoring” (p. 5). Self-monitoring “refers to the ability to notice our own actions, curiosity to examine the effects of those actions, and willingness to use those observations to improve behavior and thinking in the future” (p. 5). Taken together, these definitions include self-assessment of one's abilities, processes, and products—everything but the kitchen sink. This very broad conception might seem unwieldy, but it works because each object of assessment—competence, process, and product—is subject to the influence of feedback from oneself.

What is missing from each of these definitions, however, is the purpose of the act of self-assessment. Their authors might rightly point out that the purpose is implied, but a formal definition requires us to make it plain: Why do we ask students to self-assess? I have long held that self-assessment is feedback (Andrade, 2010), and that the purpose of feedback is to inform adjustments to processes and products that deepen learning and enhance performance; hence the purpose of self-assessment is to generate feedback that promotes learning and improvements in performance. This learning-oriented purpose of self-assessment implies that it should be formative: if there is no opportunity for adjustment and correction, self-assessment is almost pointless.

Why Self-Assess?

Clarity about the purpose of self-assessment allows us to interpret what otherwise appear to be discordant findings from research, which has produced mixed results in terms of both the accuracy of students' self-assessments and their influence on learning and/or performance. I believe the source of the discord can be traced to the different ways in which self-assessment is carried out, such as whether it is summative or formative. This issue will be taken up again in the review of current research that follows this overview. For now, consider a study of the accuracy and validity of summative self-assessment in teacher education conducted by Tejeiro et al. (2012), which showed that students' self-assigned marks tended to be higher than marks given by professors. All 122 students in the study assigned themselves a grade at the end of their course, but half of the students were told that their self-assigned grade would count toward 5% of their final grade. In both groups, students' self-assessments were higher than grades given by professors, especially for students with “poorer results” (p. 791) and those for whom self-assessment counted toward the final grade. In the group that was told their self-assessments would count toward their final grade, no relationship was found between the professor's and the students' assessments. Tejeiro et al. concluded that, although students' and professors' assessments tended to be highly similar when self-assessment did not count toward final grades, overestimations increased dramatically when students' self-assessments did count. Interviews of students who self-assigned highly discrepant grades revealed (as you might guess) that they were motivated by the desire to obtain the highest possible grades.

Studies like Tejeiro et al.'s (2012) are interesting in terms of the information they provide about the relationship between consistency and honesty, but the purpose of the self-assessment, beyond addressing interesting research questions, is unclear. There is no feedback purpose. This is also true of another study of summative self-assessment of competence, in which elementary-school children took the Test of Narrative Language and then were asked to self-evaluate “how you did in making up stories today” by pointing to one of five pictures, from a “very happy face” (rating of five) to a “very sad face” (rating of one) (Kaderavek et al., 2004, p. 37). The usual results were reported: older children and good narrators were more accurate than younger children and poor narrators, and males tended to overestimate their ability more frequently.

Typical of clinical studies of accuracy in self-evaluation, this study rests on a definition and operationalization of self-assessment with no value in terms of instructional feedback. If those children were asked to rate their stories and then revise or, better yet, if they assessed their stories according to clear, developmentally appropriate criteria before revising, the value of their self-assessments in terms of instructional feedback would skyrocket. I speculate that their accuracy would too. In contrast, studies of formative self-assessment suggest that when the act of self-assessing is given a learning-oriented purpose, students' self-assessments are relatively consistent with those of external evaluators, including professors (Lopez and Kossack, 2007; Barney et al., 2012; Leach, 2012), teachers (Bol et al., 2012; Chang et al., 2012, 2013), researchers (Panadero and Romero, 2014; Fitzpatrick and Schulz, 2016), and expert medical assessors (Hawkins et al., 2012).

My commitment to keeping self-assessment formative is firm. However, Gavin Brown (personal communication, April 2011) reminded me that summative self-assessment exists and we cannot ignore it; any definition of self-assessment must acknowledge and distinguish between formative and summative forms of it. Hence the taxonomy in Table 1, which depicts self-assessment as serving formative and/or summative purposes, and focuses on competence, processes, and/or products.

Table 1. A taxonomy of self-assessment.

Fortunately, a formative view of self-assessment seems to be taking hold in various educational contexts. For instance, Sargeant (2008) noted that all seven authors in a special issue of the Journal of Continuing Education in the Health Professions “conceptualize self-assessment within a formative, educational perspective, and see it as an activity that draws upon both external and internal data, standards, and resources to inform and make decisions about one's performance” (p. 1). Sargeant also stresses the point that self-assessment should be guided by evaluative criteria: “Multiple external sources can and should inform self-assessment, perhaps most important among them performance standards” (p. 1). Now we are talking about the how of self-assessment, which demands an operationalization of self-assessment practice. Let us examine each object of self-assessment (competence, processes, and/or products) with an eye for what is assessed and why.

What is Self-Assessed?

Monitoring and self-assessing processes are practically synonymous with self-regulated learning (SRL), or at least central components of it such as goal-setting and monitoring, or metacognition. Research on SRL has clearly shown that self-generated feedback on one's approach to learning is associated with academic gains (Zimmerman and Schunk, 2011). Self-assessment of products, such as papers and presentations, is the easiest to defend as feedback, especially when those self-assessments are grounded in explicit, relevant, evaluative criteria and followed by opportunities to relearn and/or revise (Andrade, 2010).

Including the self-assessment of competence in this definition is a little trickier. I hesitated to include it because of the risk of sneaking in global assessments of one's overall ability, self-esteem, and self-concept (“I'm good enough, I'm smart enough, and doggone it, people like me,” Franken, 1992), which do not seem relevant to a discussion of feedback in the context of learning. Research on global self-assessment, or self-perception, is popular in the medical education literature, but even there, scholars have begun to question its usefulness in terms of influencing learning and professional growth (e.g., see Sargeant et al., 2008). Eva and Regehr (2008) seem to agree in the following passage, which states the case in a way that makes it worthy of a long quotation:

Self-assessment is often (implicitly or otherwise) conceptualized as a personal, unguided reflection on performance for the purposes of generating an individually derived summary of one's own level of knowledge, skill, and understanding in a particular area. For example, this conceptualization would appear to be the only reasonable basis for studies that fit into what Colliver et al. (2005) has described as the “guess your grade” model of self-assessment research, the results of which form the core foundation for the recurring conclusion that self-assessment is generally poor. This unguided, internally generated construction of self-assessment stands in stark contrast to the model put forward by Boud (1999), who argued that the phrase self-assessment should not imply an isolated or individualistic activity; it should commonly involve peers, teachers, and other sources of information. The conceptualization of self-assessment as enunciated in Boud's description would appear to involve a process by which one takes personal responsibility for looking outward, explicitly seeking feedback, and information from external sources, then using these externally generated sources of assessment data to direct performance improvements. In this construction, self-assessment is more of a pedagogical strategy than an ability to judge for oneself; it is a habit that one needs to acquire and enact rather than an ability that one needs to master (p. 15).

As in the K-16 context, self-assessment is coming to be seen as having value as much or more so in terms of pedagogy as in assessment (Silver et al., 2008; Brown and Harris, 2014). In the end, however, I decided that self-assessing one's competence to successfully learn a particular concept or complete a particular task (which sounds a lot like self-efficacy—more on that later) might be useful feedback because it can inform decisions about how to proceed, such as the amount of time to invest in learning how to play the flute, or whether or not to seek help learning the steps of the jitterbug. An important caveat, however, is that self-assessments of competence are only useful if students have opportunities to do something about their perceived low competence—that is, if the self-assessment serves the purpose of formative feedback for the learner.

How to Self-Assess?

Panadero et al. (2016a) summarized five very different taxonomies of self-assessment and called for the development of a comprehensive typology that considers, among other things, its purpose, the presence or absence of criteria, and the method. In response, I propose the taxonomy depicted in Table 1, which focuses on the what (competence, process, or product), the why (formative or summative), and the how (methods, including whether or not they include standards, e.g., criteria) of self-assessment. The collection of example methods in the table is not exhaustive.

I put the methods in Table 1 where I think they belong, but many of them could be placed in more than one cell. Take self-efficacy, for instance, which is essentially a self-assessment of one's competence to successfully undertake a particular task (Bandura, 1997). Summative judgments of self-efficacy are certainly possible, but they seem like a silly thing to do—what is the point, from a learning perspective? Formative self-efficacy judgments, on the other hand, can inform next steps in learning and skill building. There is reason to believe that monitoring and making adjustments to one's self-efficacy (e.g., by setting goals or attributing success to effort) can be productive (Zimmerman, 2000), so I placed self-efficacy in the formative row.

It is important to emphasize that self-efficacy is task-specific, more or less (Bandura, 1997). This taxonomy does not include general, holistic evaluations of one's abilities, for example, “I am good at math.” Global assessment of competence does not provide the leverage, in terms of feedback, that is provided by task-specific assessments of competence, that is, self-efficacy. Eva and Regehr (2008) provided an illustrative example: “We suspect most people are prompted to open a dictionary as a result of encountering a word for which they are uncertain of the meaning rather than out of a broader assessment that their vocabulary could be improved” (p. 16). The exclusion of global evaluations of oneself resonates with research that clearly shows that feedback that focuses on aspects of a task (e.g., “I did not solve most of the algebra problems”) is more effective than feedback that focuses on the self (e.g., “I am bad at math”) (Kluger and DeNisi, 1996; Dweck, 2006; Hattie and Timperley, 2007). Hence, global self-evaluations of ability or competence do not appear in Table 1.

Another approach to student self-assessment that could be placed in more than one cell is traffic lights. The term traffic lights refers to asking students to use green, yellow, or red objects (or thumbs up, sideways, or down—anything will do) to indicate whether they think they have good, partial, or little understanding (Black et al., 2003). It would be appropriate for traffic lights to appear in multiple places in Table 1, depending on how they are used. Traffic lights seem to be most effective at supporting students' reflections on how well they understand a concept or have mastered a skill, which is in line with their creators' original intent, so they are categorized as formative self-assessments of one's learning—which sounds like metacognition.

In fact, several of the methods included in Table 1 come from research on metacognition, including self-monitoring, such as checking one's reading comprehension, and self-testing, e.g., checking one's performance on test items. These last two methods have been excluded from some taxonomies of self-assessment (e.g., Boud and Brew, 1995) because they do not engage students in explicitly considering relevant standards or criteria. However, new conceptions of self-assessment are grounded in theories of the self- and co-regulation of learning (Andrade and Brookhart, 2016), which includes self-monitoring of learning processes with and without explicit standards.

However, my research favors self-assessment with regard to standards (Andrade and Boulay, 2003; Andrade and Du, 2007; Andrade et al., 2008, 2009, 2010), as does related research by Panadero and his colleagues (see below). I have involved students in self-assessment of stories, essays, or mathematical word problems according to rubrics or checklists with criteria. For example, two studies investigated the relationship between elementary or middle school students' scores on a written assignment and a process that involved them in reading a model paper, co-creating criteria, self-assessing first drafts with a rubric, and revising (Andrade et al., 2008, 2010). The self-assessment was highly scaffolded: students were asked to underline key phrases in the rubric with colored pencils (e.g., underline “clearly states an opinion” in blue), then underline or circle in their drafts the evidence of having met the standard articulated by the phrase (e.g., his or her opinion) with the same blue pencil. If students found they had not met the standard, they were asked to write themselves a reminder to make improvements when they wrote their final drafts. This process was followed for each criterion on the rubric. There were main effects on scores for every self-assessed criterion on the rubric, suggesting that guided self-assessment according to the co-created criteria helped students produce more effective writing.

Panadero and his colleagues have also done quasi-experimental and experimental research on standards-referenced self-assessment, using rubrics or lists of assessment criteria presented in the form of questions (Panadero et al., 2012, 2013, 2014; Panadero and Romero, 2014). Panadero calls the list of assessment criteria a script because his work is grounded in research on scaffolding (e.g., Kollar et al., 2006); I call it a checklist because that is the term used in classroom assessment contexts. Either way, the list provides standards for the task. Here is a script for a written summary that Panadero et al. (2014) used with college students in a psychology class:

• Does my summary transmit the main idea from the text? Is it at the beginning of my summary?

• Are the important ideas also in my summary?

• Have I selected the main ideas from the text to make them explicit in my summary?

• Have I thought about my purpose for the summary? What is my goal?

Taken together, the results of the studies cited above suggest that students who engaged in self-assessment using scripts or rubrics were more self-regulated, as measured by self-report questionnaires and/or think-aloud protocols, than were students in the comparison or control groups. Effect sizes were small to moderate (η² = 0.06–0.42) and statistically significant. Most interesting, perhaps, is one study (Panadero and Romero, 2014) that demonstrated an association between rubric-referenced self-assessment activities and all three phases of SRL: forethought, performance, and reflection.
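For readers who do not work with this metric often, η² expresses the proportion of variance in an outcome that is attributable to the experimental condition. In standard ANOVA notation (my gloss, not the authors'):

\eta^2 = \frac{SS_{\text{between}}}{SS_{\text{total}}}

so the range reported above corresponds to the self-assessment conditions explaining roughly 6–42% of the variance in the self-regulation measures.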

There are surely many other methods of self-assessment to include in Table 1 , as well as interesting conversations to be had about which method goes where and why. In the meantime, I offer the taxonomy in Table 1 as a way to define and operationalize self-assessment in instructional contexts and as a framework for the following overview of current research on the subject.

An Overview of Current Research on Self-Assessment

Several recent reviews of self-assessment are available (Brown and Harris, 2013; Brown et al., 2015; Panadero et al., 2017), so I will not summarize the entire body of research here. Instead, I take a bird's-eye view of the field, with the goal of reporting on what has been sufficiently researched and what remains to be done. I used the reference lists from those reviews, as well as other relevant sources, as a starting point. To update the list of sources, I directed two new searches1, the first of the ERIC database and the second of both ERIC and PsycINFO. Both searches used two search terms, "self-assessment" OR "self-evaluation." Advanced search options set four delimiters: (1) peer-reviewed; (2) January 2013–October 2016, and then October 2016–March 2019; (3) English; and (4) full-text. Because the focus was on K-20 educational contexts, sources were excluded if they were about early childhood education or professional development.

The first search yielded 347 hits; the second, 1,163. Research that was unrelated to instructional feedback was excluded, such as studies limited to self-estimates of performance before or after taking a test, guesses about whether a test item was answered correctly, and estimates of how many tasks could be completed in a certain amount of time. Although some of the excluded studies might be thought of as useful investigations of self-monitoring, as a group they seemed too unrelated to theories of self-generated feedback to be appropriate for this review. Seventy-six studies were selected for inclusion in Table S1 (Supplementary Material), which also contains a few studies published before 2013 that were not included in key reviews, as well as studies solicited directly from authors.

Table S1 in the Supplementary Material contains a complete list of studies included in this review, organized by the focus or topic of the study, along with brief descriptions of each. The "type" column in Table S1 indicates whether a study focused on formative or summative self-assessment. This distinction was often difficult to make due to a lack of information. For example, Memis and Seven (2015) frame their study in terms of formative assessment, and note that the purpose of the self-evaluation done by the sixth-grade students was to "help students improve their [science] reports" (p. 39), but they do not indicate how the self-assessments were done, nor whether students were given time to revise their reports based on their judgments or supported in making revisions. A sentence or two of explanation about the process of self-assessment in the procedures sections of published studies would be most useful.

Figure 1 graphically represents the number of studies in the four most common topic categories found in the table: achievement, consistency, student perceptions, and SRL. The figure reveals that research on self-assessment is on the rise, with consistency the most popular topic. Of the 76 studies in the table, 44 were inquiries into the consistency of students' self-assessments with other judgments (e.g., a test score or teacher's grade). Twenty-five studies investigated the relationship between self-assessment and achievement. Fifteen explored students' perceptions of self-assessment. Twelve focused on the association between self-assessment and self-regulated learning. One examined self-efficacy, and two qualitative studies documented the mental processes involved in self-assessment. The sum of the research topic counts (n = 99; 44 + 25 + 15 + 12 + 1 + 2) is more than 76 because several studies had multiple foci. In the remainder of this review I examine each topic in turn.


Figure 1. Topics of self-assessment studies, 2013–2018.

Consistency

Table S1 (Supplementary Material) reveals that much of the recent research on self-assessment has investigated the accuracy or, more accurately, consistency, of students' self-assessments. The term consistency is more appropriate in the classroom context because the quality of students' self-assessments is often determined by comparing them with their teachers' assessments and then generating correlations. Given the evidence of the unreliability of teachers' grades (Falchikov, 2005), the assumption that teachers' assessments are accurate might not be well-founded (Leach, 2012; Brown et al., 2015). Ratings of student work done by researchers are also suspect, unless evidence of the validity and reliability of the inferences made about student work by researchers is available. Consequently, much of the research on classroom-based self-assessment should use the term consistency, which refers to the degree of alignment between students' and expert raters' evaluations, and reserve the purer, more rigorous term accuracy for cases in which it is fitting.
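To make this operational definition concrete, here is a minimal sketch of how such a consistency coefficient might be computed. The ratings are invented for illustration, and scipy's pearsonr is simply one convenient way to obtain the correlation:

# Hypothetical ratings of ten essays on a 1-6 rubric scale: each student's
# self-assessment paired with the teacher's rating of the same essay.
from scipy.stats import pearsonr

self_ratings = [4, 5, 3, 6, 4, 5, 2, 4, 5, 3]
teacher_ratings = [3, 5, 2, 5, 4, 4, 2, 3, 4, 3]

# Consistency = degree of alignment between the two sets of judgments.
r, p = pearsonr(self_ratings, teacher_ratings)
print(f"consistency (Pearson r) = {r:.2f}, p = {p:.3f}")

Note that a high r in a sketch like this would still say nothing about whether the teacher's ratings are themselves valid or reliable, which is precisely the concern raised above.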

In their review, Brown and Harris (2013) reported that correlations between student self-ratings and other measures tended to be weakly to strongly positive, ranging from r ≈ 0.20 to 0.80, with few studies reporting correlations >0.60. But their review included results from studies of any self-appraisal of school work, including summative self-rating/grading, predictions about the correctness of answers on test items, and formative, criteria-based self-assessments, a combination of methods that makes the correlations they reported difficult to interpret. Qualitatively different forms of self-assessment, especially summative and formative types, cannot be lumped together without obfuscating important aspects of self-assessment as feedback.

Given my concern about combining studies of summative and formative assessment, you might anticipate a call for research on consistency that distinguishes between the two. I will make no such call, for three reasons. One is that we have enough research on the subject, including the 22 studies in Table S1 (Supplementary Material) that were published after Brown and Harris's review (2013). Drawing only on studies included in Table S1 (Supplementary Material), we can say with confidence that summative self-assessment tends to be inconsistent with external judgments (Baxter and Norman, 2011; De Grez et al., 2012; Admiraal et al., 2015), with males tending to overrate and females to underrate (Nowell and Alston, 2007; Marks et al., 2018). There are exceptions (Alaoutinen, 2012; Lopez-Pastor et al., 2012) as well as mixed results, with students being consistent regarding some aspects of their learning but not others (Blanch-Hartigan, 2011; Harding and Hbaci, 2015; Nguyen and Foster, 2018). We can also say that older, more academically competent learners tend to be more consistent (Hacker et al., 2000; Lew et al., 2010; Alaoutinen, 2012; Guillory and Blankson, 2017; Butler, 2018; Nagel and Lindsey, 2018). There is evidence that consistency can be improved through experience (Lopez and Kossack, 2007; Yilmaz, 2017; Nagel and Lindsey, 2018), the use of guidelines (Bol et al., 2012), feedback (Thawabieh, 2017), and standards (Baars et al., 2014), perhaps in the form of rubrics (Panadero and Romero, 2014). Modeling and feedback also help (Labuhn et al., 2010; Miller and Geraci, 2011; Hawkins et al., 2012; Kostons et al., 2012).

An outcome typical of research on the consistency of summative self-assessment can be found in row 59 of Table S1, which summarizes the study by Tejeiro et al. (2012) discussed earlier: students' self-assessments were higher than the marks given by professors, especially for students with poorer results, and no relationship was found between the professors' and the students' assessments in the group in which self-assessment counted toward the final mark. Students are not stupid: if they know that they can influence their final grade, and that their judgment is summative rather than intended to inform revision and improvement, they will be motivated to inflate their self-evaluation. I do not believe we need more research to demonstrate that phenomenon.

The second reason I am not calling for additional research on consistency is that much of it seems somewhat irrelevant. This might be because the interest in accuracy is rooted in clinical research on calibration, which has very different aims. Calibration accuracy is the "magnitude of consent between learners' true and self-evaluated task performance. Accurately calibrated learners' task performance equals their self-evaluated task performance" (Wollenschläger et al., 2016). Calibration research often asks study participants to predict or postdict the correctness of their responses to test items. I caution against generalizing from clinical experiments to authentic classroom contexts because the dismal picture of our human potential to self-judge was painted by calibration researchers before study participants were effectively taught how to predict with accuracy, provided with the tools they needed to be accurate, or motivated to do so. Calibration researchers know that, of course, and have conducted intervention studies that attempt to improve accuracy, with some success (e.g., Bol et al., 2012). Studies of formative self-assessment also suggest that consistency increases when it is taught and supported in many of the ways any other skill must be taught and supported (Lopez and Kossack, 2007; Labuhn et al., 2010; Chang et al., 2012, 2013; Hawkins et al., 2012; Panadero and Romero, 2014; Lin-Siegler et al., 2015; Fitzpatrick and Schulz, 2016).
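To clarify what is being measured in that tradition: calibration studies typically compare item-by-item confidence judgments with actual performance. In common, if simplified, notation of my own choosing (not drawn from the studies cited):

\text{bias} = \frac{1}{n}\sum_{i=1}^{n}(c_i - p_i), \qquad \text{absolute accuracy} = \frac{1}{n}\sum_{i=1}^{n}\lvert c_i - p_i \rvert

where c_i is the judged probability of success on item i and p_i is the actual outcome (e.g., 1 if correct, 0 if not). Positive bias signals overconfidence, and values near zero signal good calibration. Nothing in these indices involves standards or criteria for quality work, which is one reason results from this paradigm transfer poorly to criteria-referenced self-assessment in classrooms.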

Even clinical psychological studies that go beyond calibration to examine the associations between monitoring accuracy and subsequent study behaviors do not transfer well to classroom assessment research. After repeatedly encountering claims that, for example, low self-assessment accuracy leads to poor task-selection accuracy and "suboptimal learning outcomes" (Raaijmakers et al., 2019, p. 1), I dug into the cited studies and discovered two limitations. The first is that the tasks in which study participants engage are quite inauthentic. A typical task involves studying "word pairs (e.g., railroad—mother), followed by a delayed judgment of learning (JOL) in which the students predicted the chances of remembering the pair… After making a JOL, the entire pair was presented for restudy for 4 s [sic], and after all pairs had been restudied, a criterion test of paired-associate recall occurred" (Dunlosky and Rawson, 2012, p. 272). Although memory for word pairs might be important in some classroom contexts, it is not safe to assume that results from studies like that one can predict students' behaviors after criterion-referenced self-assessment of their comprehension of complex texts, lengthy compositions, or solutions to multi-step mathematical problems.

The second limitation of studies like the typical one described above is more serious: participants in such research are not permitted to regulate their own studying, which is experimentally manipulated by a computer program. This came as a surprise, since many of the claims were about students' poor study choices, yet participants were rarely allowed to make actual choices. For example, Dunlosky and Rawson (2012) permitted participants to "use monitoring to effectively control learning" by programming the computer so that "a participant would need to have judged his or her recall of a definition entirely correct on three different trials, and once they judged it entirely correct on the third trial, that particular key term definition was dropped [by the computer program] from further practice" (p. 272). The authors note that this study design is an improvement on designs that did not require all participants to use the same regulation algorithm, but it does not reflect the kinds of decisions that learners make in class or while doing homework. In fact, a large body of research shows that students can make wise choices when they self-pace the study of to-be-learned materials and then allocate study time to each item (Bjork et al., 2013, p. 425):

In a typical experiment, the students first study all the items at an experimenter-paced rate (e.g., study 60 paired associates for 3 s each), which familiarizes the students with the items; after this familiarity phase, the students then either choose which items they want to restudy (e.g., all items are presented in an array, and the students select which ones to restudy) and/or pace their restudy of each item. Several dependent measures have been widely used, such as how long each item is studied, whether an item is selected for restudy, and in what order items are selected for restudy. The literature on these aspects of self-regulated study is massive (for a comprehensive overview, see both Dunlosky and Ariel, 2011 and Son and Metcalfe, 2000), but the evidence is largely consistent with a few basic conclusions. First, if students have a chance to practice retrieval prior to restudying items, they almost exclusively choose to restudy unrecalled items and drop the previously recalled items from restudy (Metcalfe and Kornell, 2005). Second, when pacing their study of individual items that have been selected for restudy, students typically spend more time studying items that are more, rather than less, difficult to learn. Such a strategy is consistent with a discrepancy-reduction model of self-paced study (which states that people continue to study an item until they reach mastery), although some key revisions to this model are needed to account for all the data. For instance, students may not continue to study until they reach some static criterion of mastery, but instead, they may continue to study until they perceive that they are no longer making progress.

I propose that this research, which suggests that students' unscaffolded, unmeasured, informal self-assessments tend to lead to appropriate task selection, is better aligned with research on classroom-based self-assessment. Nonetheless, even this comparison is inadequate because the study participants were not taught to compare their performance to the criteria for mastery, as is often done in classroom-based self-assessment.
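The discrepancy-reduction account described in the quotation above, including the revision in its final sentence, can be made concrete with a toy simulation. The mastery criterion, learning gains, and stopping rule below are invented for illustration and are not drawn from any of the cited studies:

# Toy model: a learner restudies each item until judged learning reaches a
# mastery criterion, or quits once perceived progress per trial is negligible.
items = {"railroad-mother": 0.2, "mitosis-definition": 0.5}  # judged learning, 0-1

MASTERY = 0.90        # static mastery criterion
MIN_PROGRESS = 0.02   # below this perceived gain, the learner stops studying

for item, judged in items.items():
    trials = 0
    while judged < MASTERY:
        gain = 0.15 * (1.0 - judged)  # diminishing returns on each restudy trial
        if gain < MIN_PROGRESS:       # "no longer making progress" stopping rule
            break
        judged += gain
        trials += 1
    print(f"{item}: {trials} restudy trials, judged learning = {judged:.2f}")

In this sketch the harder item receives more restudy trials, and the learner stops short of the static mastery criterion once perceived progress stalls, mirroring both conclusions in the quoted passage.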

The third and final reason I do not believe we need additional research on consistency is that I think it is a distraction from the true purposes of self-assessment. Many, if not most, of the articles about the accuracy of self-assessment are grounded in the assumption that accuracy is necessary for self-assessment to be useful, particularly in terms of subsequent studying and revision behaviors. Although it seems obvious that accurate evaluations of their performance should positively influence students' selection of study strategies, which should in turn produce improvements in achievement, I have not seen relevant research that tests those conjectures. Some claim that inaccurate estimates of learning lead to the selection of inappropriate learning tasks (Kostons et al., 2012), but they cite research that does not support their claim. For example, Kostons et al. cite studies that focus on the effectiveness of SRL interventions but do not address the accuracy of participants' estimates of learning, nor the relationship of those estimates to the selection of next steps. Other studies produce findings that support my skepticism. Take, for instance, two relevant studies of calibration: one suggested that performance and judgments of performance had little influence on subsequent test preparation behavior (Hacker et al., 2000), and the other showed that study participants followed their predictions of performance to the same degree, regardless of monitoring accuracy (van Loon et al., 2014).

Eva and Regehr (2008) believe that:

Research questions that take the form of “How well do various practitioners self-assess?” “How can we improve self-assessment?” or “How can we measure self-assessment skill?” should be considered defunct and removed from the research agenda [because] there have been hundreds of studies into these questions and the answers are “Poorly,” “You can't,” and “Don't bother” (p. 18).

I almost agree. A study that could change my mind about the importance of accuracy in self-assessment would be an investigation that goes beyond attempting to improve accuracy for its own sake and instead examines the relearning and revision behaviors of accurate and inaccurate self-assessors: Do students whose self-assessments match the valid and reliable judgments of expert raters (hence my use of the term accuracy) make better decisions about what they need to do to deepen their learning and improve their work? Here, I admit, is a call for research related to consistency: I would love to see a high-quality investigation of the relationship between accuracy in formative self-assessment, students' subsequent study and revision behaviors, and their learning. For example, a study that closely examines the revisions to writing made by accurate and inaccurate self-assessors, and the resulting outcomes in terms of the quality of their writing, would be most welcome.

Table S1 (Supplementary Material) indicates that by 2018 researchers had begun publishing studies that more directly address the hypothesized link between self-assessment and subsequent learning behaviors, as well as important questions about the processes learners engage in while self-assessing (Yan and Brown, 2017). One, a study by Nugteren et al. (2018; row 19 in Table S1), asked "How do inaccurate [summative] self-assessments influence task selections?" (p. 368) and employed a clever exploratory research design. The results suggested that most of the 15 students in their sample over-estimated their performance and made inaccurate learning-task selections. Nugteren et al. recommended helping students make more accurate self-assessments, but I think the more interesting finding is related to why students made task selections that were too difficult or too easy, given their prior performance: they based most task selections on interest in the content of particular items (not the overarching content to be learned), and infrequently considered task difficulty and support level. For instance, while working on the genetics tasks, students reported selecting tasks because they were fun or interesting, not because the tasks addressed self-identified weaknesses in their understanding of genetics. Nugteren et al. proposed that students would benefit from instruction on task selection. I second that proposal: rather than focusing our efforts on accuracy in the service of improving subsequent task selection, let us simply teach students to use the information at hand to select next best steps, among other things.

Butler (2018; row 76 in Table S1) has conducted at least two studies of learners' processes of responding to self-assessment items and how they arrive at their judgments. Comparing generic, decontextualized items to task-specific, contextualized items (which she calls after-task items), she drew two unsurprising conclusions: the task-specific items "generally showed higher correlations with task performance," and older students "appeared to be more conservative in their judgment compared with their younger counterparts" (p. 249). The contribution of the study is the detailed information it provides about how students generated their judgments. For example, Butler's qualitative data analyses revealed that when asked to self-assess in terms of vague or non-specific items, the children often "contextualized the descriptions based on their own experiences, goals, and expectations" (p. 257), focused on the task at hand, and situated items in the specific task context. Perhaps as a result, the correlation between after-task self-assessment and task performance was generally higher than for generic self-assessment.

Butler (2018) notes that her study enriches our empirical understanding of the processes by which children respond to self-assessment. This is a very promising direction for the field. Similar studies of processing during formative self-assessment of a variety of task types in a classroom context would likely produce significant advances in our understanding of how and why self-assessment influences learning and performance.

Student Perceptions

Fifteen of the studies listed in Table S1 (Supplementary Material) focused on students' perceptions of self-assessment. The studies of children suggest that they tend to have unsophisticated understandings of its purposes (Harris and Brown, 2013; Bourke, 2016), which might lead to shallow implementation of related processes. In contrast, results from the studies conducted in higher education settings suggested that college and university students understood the function of self-assessment (Ratminingsih et al., 2018) and generally found it useful for guiding evaluation and revision (Micán and Medina, 2017), understanding how to take responsibility for learning (Lopez and Kossack, 2007; Bourke, 2014; Ndoye, 2017), prompting them to think more critically and deeply (van Helvoort, 2012; Siow, 2015), applying newfound skills (Murakami et al., 2012), and fostering self-regulated learning by guiding them to set goals, plan, self-monitor, and reflect (Wang, 2017).

Not surprisingly, positive perceptions of self-assessment were typically developed by students who actively engaged in the formative type by, for example, developing their own criteria for an effective self-assessment response (Bourke, 2014), or using a rubric or checklist to guide their assessments and then revising their work (Huang and Gui, 2015; Wang, 2017). Earlier research suggested that children's attitudes toward self-assessment can become negative if it is summative (Ross et al., 1998). However, even summative self-assessment was reported by adult learners to be useful in helping them become more critical of their own and others' writing, both during the course in question and in subsequent courses (van Helvoort, 2012).

Achievement

Twenty-five of the studies in Table S1 (Supplementary Material) investigated the relation between self-assessment and achievement, including two meta-analyses. Twenty of the 25 clearly employed the formative type. Without exception, those 20 studies, plus the two meta-analyses (Graham et al., 2015; Sanchez et al., 2017), demonstrated a positive association between self-assessment and learning. The meta-analysis conducted by Graham and his colleagues, which included 10 studies, yielded an average weighted effect size of 0.62 on writing quality. The Sanchez et al. meta-analysis revealed that, although 12 of the 44 effect sizes were negative, on average, "students who engaged in self-grading performed better (g = 0.34) on subsequent tests than did students who did not" (p. 1,049).
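As a reminder of what these metrics denote (the notation is standard, not taken from the meta-analyses themselves): g is a standardized mean difference, the gap between treatment and control means scaled by the pooled standard deviation, with a correction factor for small samples:

g = \frac{\bar{X}_T - \bar{X}_C}{SD_{\text{pooled}}}\left(1 - \frac{3}{4(n_T + n_C) - 9}\right)

Read this way, g = 0.34 means that, on average, students who self-graded scored about a third of a pooled standard deviation higher on subsequent tests than students who did not.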

All but two of the non-meta-analytic studies of achievement in Table S1 (Supplementary Material) were quasi-experimental or experimental, providing relatively rigorous evidence that their treatment groups outperformed comparison or control groups on everything from writing to dart-throwing, map-making, speaking English, and exams in a wide variety of disciplines. Of the two experiments on summative self-assessment, one (Miller and Geraci, 2011) resulted in no improvements in exam scores, while the other (Raaijmakers et al., 2017) did.

It would be easy to overgeneralize and claim that the question about the effect of self-assessment on learning has been answered, but there are still unanswered questions about the key components of effective self-assessment, especially the social-emotional components related to power and trust (Andrade and Brown, 2016). The trends are clear, however: formative forms of self-assessment appear to promote knowledge and skill development. This is not surprising, given that self-assessment involves many of the processes known to support learning, including practice, feedback, revision, and especially the intellectually demanding work of making complex, criteria-referenced judgments (Panadero et al., 2014). Boud (1995a,b) predicted this trend when he noted that many self-assessment processes undermine learning by rushing to judgment, thereby failing to engage students with the standards or criteria for their work.

Self-Regulated Learning

The association between self-assessment and learning has also been explained in terms of self-regulation (Andrade, 2010; Panadero and Alonso-Tapia, 2013; Andrade and Brookhart, 2016, 2019; Panadero et al., 2016b). Self-regulated learning (SRL) occurs when learners set goals and then monitor and manage their thoughts, feelings, and actions to reach those goals. SRL is moderately to highly correlated with achievement (Zimmerman and Schunk, 2011). Research suggests that formative assessment is a potential influence on SRL (Nicol and Macfarlane-Dick, 2006). The 12 studies in Table S1 (Supplementary Material) that focus on SRL demonstrate the recent increase in interest in the relationship between self-assessment and SRL.

Conceptual and practical overlaps between the two fields are abundant. In fact, Brown and Harris (2014) recommend that student self-assessment no longer be treated as an assessment, but as an essential competence for self-regulation. Butler and Winne (1995) introduced the role of self-generated feedback in self-regulation years ago:

[For] all self-regulated activities, feedback is an inherent catalyst. As learners monitor their engagement with tasks, internal feedback is generated by the monitoring process. That feedback describes the nature of outcomes and the qualities of the cognitive processes that led to those states (p. 245).

The outcomes and processes referred to by Butler and Winne are many of the same products and processes I referred to earlier in the definition of self-assessment and in Table 1 .

In general, research and practice related to self-assessment have tended to focus on judging the products of student learning, while scholarship on self-regulated learning encompasses both processes and products. The very practical focus of much of the research on self-assessment means it might be playing catch-up, in terms of theory development, with the SRL literature, which is grounded in experimental paradigms from cognitive psychology (de Bruin and van Gog, 2012), while self-assessment research is ahead in terms of implementation (E. Panadero, personal communication, October 21, 2016). One major exception is the work done on Self-Regulated Strategy Development (Glaser and Brunstein, 2007; Harris et al., 2008), which has successfully integrated SRL research with classroom practices, including self-assessment, to teach writing to students with special needs.

Nicol and Macfarlane-Dick (2006) have been explicit about the potential for self-assessment practices to support self-regulated learning:

To develop systematically the learner's capacity for self-regulation, teachers need to create more structured opportunities for self-monitoring and the judging of progression to goals. Self-assessment tasks are an effective way of achieving this, as are activities that encourage reflection on learning progress (p. 207).

The studies of SRL in Table S1 (Supplementary Material) provide encouraging findings regarding the potential role of self-assessment in promoting achievement, self-regulated learning in general, and metacognition and study strategies related to task selection in particular. The studies also represent a solution to the "methodological and theoretical challenges involved in bringing metacognitive research to the real world, using meaningful learning materials" (Koriat, 2012, p. 296).

Future Directions for Research

I agree with Yan and Brown's (2017) statement that "from a pedagogical perspective, the benefits of self-assessment may come from active engagement in the learning process, rather than by being 'veridical' or coinciding with reality, because students' reflection and metacognitive monitoring lead to improved learning" (p. 1,248). Future research should focus less on accuracy/consistency/veridicality and more on the precise mechanisms of self-assessment (Butler, 2018).

An important aspect of research on self-assessment that is not explicitly represented in Table S1 (Supplementary Material) is practice, or pedagogy: under what conditions does self-assessment work best, and how are those conditions influenced by context? Fortunately, the studies listed in the table, as well as others (see especially Andrade and Valtcheva, 2009; Nielsen, 2014; Panadero et al., 2016a), point toward an answer. But we still have questions about how best to scaffold effective formative self-assessment. One area of inquiry concerns the characteristics of the task being assessed and the standards or criteria used by learners during self-assessment.

Influence of Types of Tasks and Standards or Criteria

The type of task or competency assessed seems to matter (e.g., Dolosic, 2018; Nguyen and Foster, 2018), as do the criteria (Yilmaz, 2017), but we do not yet have a comprehensive understanding of how or why. There is some evidence that it is important for the criteria used to self-assess to be concrete, task-specific (Butler, 2018), and graduated. For example, Fastre et al. (2010) revealed an association between self-assessment according to task-specific criteria and task performance: in a quasi-experimental study of 39 novice vocational education students studying stoma care, they compared concrete, task-specific criteria ("performance-based criteria") such as "Introduces herself to the patient" and "Consults the care file for details concerning the stoma" to vaguer, "competence-based criteria" such as "Shows interest, listens actively, shows empathy to the patient" and "Is discreet with sensitive topics." The performance-based criteria group outperformed the competence-based group on tests of task performance, presumably because "performance-based criteria make it easier to distinguish levels of performance, enabling a step-by-step process of performance improvement" (p. 530).

This finding echoes the results of a study of self-regulated learning by Kitsantas and Zimmerman (2006), who argued that "fine-grained standards can have two key benefits: They can enable learners to be more sensitive to small changes in skill and make more appropriate adaptations in learning strategies" (p. 203). In their study, 70 college students were taught how to throw darts at a target. The purpose of the study was to examine the role of graphing of self-recorded outcomes and self-evaluative standards in learning a motor skill. Students who were provided with graduated self-evaluative standards surpassed "those who were provided with absolute standards or no standards (control) in both motor skill and in motivational beliefs (i.e., self-efficacy, attributions, and self-satisfaction)" (p. 201). Kitsantas and Zimmerman hypothesized that setting high absolute standards would limit a learner's sensitivity to small improvements in functioning. This hypothesis was supported by the finding that students who set absolute standards reported significantly less awareness of learning progress (and hit the bull's-eye less often) than students who set graduated standards: "The correlation between the self-evaluation and dart-throwing outcomes measures was extraordinarily high (r = 0.94)" (p. 210). Classroom-based research on specific, graduated self-assessment criteria would be informative.

Cognitive and Affective Mechanisms of Self-Assessment

There are many additional questions about pedagogy, such as the hoped-for investigation mentioned above of the relationship between accuracy in formative self-assessment, students' subsequent study behaviors, and their learning. There is also a need for research on how to help teachers give students a central role in their learning by creating space for self-assessment (e.g., see Hawe and Parr, 2014), and on the complex power dynamics involved in doing so (Tan, 2004, 2009; Taras, 2008; Leach, 2012). However, there is an even more pressing need for investigations into the internal mechanisms experienced by students engaged in assessing their own learning. Angela Lui and I call this the next black box (Lui, 2017).

Black and Wiliam (1998) used the term black box to emphasize the fact that what happened in most classrooms was largely unknown: all we knew was that some inputs (e.g., teachers, resources, standards, and requirements) were fed into the box, and that certain outputs (e.g., more knowledgeable and competent students, acceptable levels of achievement) would follow. But what, they asked, is happening inside, and what new inputs will produce better outputs? Black and Wiliam's review spawned a great deal of research on formative assessment, some but not all of which suggests a positive relationship with academic achievement (Bennett, 2011; Kingston and Nash, 2011). To better understand why and how the use of formative assessment in general, and self-assessment in particular, is associated with improvements in academic achievement in some instances but not others, we need research that looks into the next black box: the cognitive and affective mechanisms of students who are engaged in assessment processes (Lui, 2017).

The role of internal mechanisms has been discussed in theory but not yet fully tested. Crooks (1988) argued that the impact of assessment is influenced by students' interpretations of the tasks and results, and Butler and Winne (1995) theorized that both cognitive and affective processes play a role in determining how feedback is internalized and used to self-regulate learning. Other theoretical frameworks about the internal processes of receiving and responding to feedback have also been developed (e.g., Nicol and Macfarlane-Dick, 2006; Draper, 2009; Andrade, 2013; Lipnevich et al., 2016). Yet Shute (2008) noted in her review of the literature on formative feedback that "despite the plethora of research on the topic, the specific mechanisms relating feedback to learning are still mostly murky, with very few (if any) general conclusions" (p. 156). This area is ripe for research.

Conclusion

Self-assessment is the act of monitoring one's processes and products in order to make adjustments that deepen learning and enhance performance. Although it can be summative, the evidence presented in this review strongly suggests that self-assessment is most beneficial, in terms of both achievement and self-regulated learning, when it is used formatively and supported by training.

What is not yet clear is why and how self-assessment works. Those of you who like to investigate phenomena that are maddeningly difficult to measure will rejoice to hear that the cognitive and affective mechanisms of self-assessment are the next black box. Studies of the ways in which learners think and feel, the interactions between their thoughts and feelings and their context, and the implications for pedagogy will make major contributions to our field.

Author Contributions

The author confirms being the sole contributor of this work and has approved it for publication.

Conflict of Interest Statement

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2019.00087/full#supplementary-material

Footnotes

1. I am grateful to my graduate assistants, Joanna Weaver and Taja Young, for conducting the searches.

References

Admiraal, W., Huisman, B., and Pilli, O. (2015). Assessment in massive open online courses. Electron. J. e-Learning 13, 207–216.


Alaoutinen, S. (2012). Evaluating the effect of learning style and student background on self-assessment accuracy. Comput. Sci. Educ. 22, 175–198. doi: 10.1080/08993408.2012.692924


Al-Rawahi, N. M., and Al-Balushi, S. M. (2015). The effect of reflective science journal writing on students' self-regulated learning strategies. Int. J. Environ. Sci. Educ. 10, 367–379. doi: 10.12973/ijese.2015.250a

Andrade, H. (2010). "Students as the definitive source of formative assessment: academic self-assessment and the self-regulation of learning," in Handbook of Formative Assessment, eds H. Andrade and G. Cizek (New York, NY: Routledge), 90–105.

Andrade, H. (2013). "Classroom assessment in the context of learning theory and research," in Sage Handbook of Research on Classroom Assessment, ed J. H. McMillan (New York, NY: Sage), 17–34. doi: 10.4135/9781452218649.n2

Andrade, H. (2018). "Feedback in the context of self-assessment," in Cambridge Handbook of Instructional Feedback, eds A. Lipnevich and J. Smith (Cambridge: Cambridge University Press), 376–408.


Andrade, H., and Boulay, B. (2003). The role of rubric-referenced self-assessment in learning to write. J. Educ. Res. 97, 21–34. doi: 10.1080/00220670309596625

Andrade, H., and Brookhart, S. (2019). Classroom assessment as the co-regulation of learning. Assess. Educ. Principles Policy Pract. doi: 10.1080/0969594X.2019.1571992

Andrade, H., and Brookhart, S. M. (2016). "The role of classroom assessment in supporting self-regulated learning," in Assessment for Learning: Meeting the Challenge of Implementation, eds D. Laveault and L. Allal (Heidelberg: Springer), 293–309. doi: 10.1007/978-3-319-39211-0_17

Andrade, H., and Du, Y. (2007). Student responses to criteria-referenced self-assessment. Assess. Evalu. High. Educ. 32, 159–181. doi: 10.1080/02602930600801928

Andrade, H., Du, Y., and Mycek, K. (2010). Rubric-referenced self-assessment and middle school students' writing. Assess. Educ. 17, 199–214. doi: 10.1080/09695941003696172

Andrade, H., Du, Y., and Wang, X. (2008). Putting rubrics to the test: The effect of a model, criteria generation, and rubric-referenced self-assessment on elementary school students' writing. Educ. Meas. 27, 3–13. doi: 10.1111/j.1745-3992.2008.00118.x

Andrade, H., and Valtcheva, A. (2009). Promoting learning and achievement through self- assessment. Theory Pract. 48, 12–19. doi: 10.1080/00405840802577544

Andrade, H., Wang, X., Du, Y., and Akawi, R. (2009). Rubric-referenced self-assessment and self-efficacy for writing. J. Educ. Res. 102, 287–302. doi: 10.3200/JOER.102.4.287-302

Andrade, H. L., and Brown, G. T. L. (2016). "Student self-assessment in the classroom," in Handbook of Human and Social Conditions in Assessment, eds G. T. L. Brown and L. R. Harris (New York, NY: Routledge), 319–334.


Baars, M., Vink, S., van Gog, T., de Bruin, A., and Paas, F. (2014). Effects of training self-assessment and using assessment standards on retrospective and prospective monitoring of problem solving. Learn. Instr. 33, 92–107. doi: 10.1016/j.learninstruc.2014.04.004

Balderas, I., and Cuamatzi, P. M. (2018). Self and peer correction to improve college students' writing skills. Profile. 20, 179–194. doi: 10.15446/profile.v20n2.67095

Bandura, A. (1997). Self-efficacy: The Exercise of Control. New York, NY: Freeman.

Barney, S., Khurum, M., Petersen, K., Unterkalmsteiner, M., and Jabangwe, R. (2012). Improving students with rubric-based self-assessment and oral feedback. IEEE Transac. Educ. 55, 319–325. doi: 10.1109/TE.2011.2172981

Baxter, P., and Norman, G. (2011). Self-assessment or self deception? A lack of association between nursing students' self-assessment and performance. J. Adv. Nurs. 67, 2406–2413. doi: 10.1111/j.1365-2648.2011.05658.x


Bennett, R. E. (2011). Formative assessment: a critical review. Assess. Educ. 18, 5–25. doi: 10.1080/0969594X.2010.513678

Birjandi, P., and Hadidi Tamjid, N. (2012). The role of self-, peer and teacher assessment in promoting Iranian EFL learners' writing performance. Assess. Evalu. High. Educ. 37, 513–533. doi: 10.1080/02602938.2010.549204

Bjork, R. A., Dunlosky, J., and Kornell, N. (2013). Self-regulated learning: beliefs, techniques, and illusions. Annu. Rev. Psychol. 64, 417–444. doi: 10.1146/annurev-psych-113011-143823

Black, P., Harrison, C., Lee, C., Marshall, B., and Wiliam, D. (2003). Assessment for Learning: Putting it into Practice. Berkshire: Open University Press.

Black, P., and Wiliam, D. (1998). Inside the black box: raising standards through classroom assessment. Phi Delta Kappan 80, 139–144; 146–148.

Blanch-Hartigan, D. (2011). Medical students' self-assessment of performance: results from three meta-analyses. Patient Educ. Counsel. 84, 3–9. doi: 10.1016/j.pec.2010.06.037

Bol, L., Hacker, D. J., Walck, C. C., and Nunnery, J. A. (2012). The effects of individual or group guidelines on the calibration accuracy and achievement of high school biology students. Contemp. Educ. Psychol. 37, 280–287. doi: 10.1016/j.cedpsych.2012.02.004

Boud, D. (1995a). Implementing Student Self-Assessment, 2nd Edn. Australian Capital Territory: Higher Education Research and Development Society of Australasia.

Boud, D. (1995b). Enhancing Learning Through Self-Assessment. London: Kogan Page.

Boud, D. (1999). Avoiding the traps: Seeking good practice in the use of self-assessment and reflection in professional courses. Soc. Work Educ. 18, 121–132. doi: 10.1080/02615479911220131

Boud, D., and Brew, A. (1995). Developing a typology for learner self-assessment practices. Res. Dev. High. Educ. 18, 130–135.

Bourke, R. (2014). Self-assessment in professional programmes within tertiary institutions. Teach. High. Educ. 19, 908–918. doi: 10.1080/13562517.2014.934353

Bourke, R. (2016). Liberating the learner through self-assessment. Cambridge J. Educ. 46, 97–111. doi: 10.1080/0305764X.2015.1015963

Brown, G., Andrade, H., and Chen, F. (2015). Accuracy in student self-assessment: directions and cautions for research. Assess. Educ. 22, 444–457. doi: 10.1080/0969594X.2014.996523

Brown, G. T., and Harris, L. R. (2013). "Student self-assessment," in Sage Handbook of Research on Classroom Assessment, ed J. H. McMillan (Los Angeles, CA: Sage), 367–393. doi: 10.4135/9781452218649.n21

Brown, G. T. L., and Harris, L. R. (2014). The future of self-assessment in classroom practice: reframing self-assessment as a core competency. Frontline Learn. Res. 2, 22–30. doi: 10.14786/flr.v2i1.24

Butler, D. L., and Winne, P. H. (1995). Feedback and self-regulated learning: a theoretical synthesis. Rev. Educ. Res. 65, 245–281. doi: 10.3102/00346543065003245

Butler, Y. G. (2018). "Young learners' processes and rationales for responding to self-assessment items: cases for generic can-do and five-point Likert-type formats," in Useful Assessment and Evaluation in Language Education, eds J. Davis et al. (Washington, DC: Georgetown University Press), 21–39. doi: 10.2307/j.ctvvngrq.5


Chang, C.-C., Liang, C., and Chen, Y.-H. (2013). Is learner self-assessment reliable and valid in a Web-based portfolio environment for high school students? Comput. Educ. 60, 325–334. doi: 10.1016/j.compedu.2012.05.012

Chang, C.-C., Tseng, K.-H., and Lou, S.-J. (2012). A comparative analysis of the consistency and difference among teacher-assessment, student self-assessment and peer-assessment in a Web-based portfolio assessment environment for high school students. Comput. Educ. 58, 303–320. doi: 10.1016/j.compedu.2011.08.005

Colliver, J., Verhulst, S., and Barrows, H. (2005). Self-assessment in medical practice: a further concern about the conventional research paradigm. Teach. Learn. Med. 17, 200–201. doi: 10.1207/s15328015tlm1703_1

Crooks, T. J. (1988). The impact of classroom evaluation practices on students. Rev. Educ. Res. 58, 438–481. doi: 10.3102/00346543058004438

de Bruin, A. B. H., and van Gog, T. (2012). Improving self-monitoring and self-regulation: from cognitive psychology to the classroom. Learn. Instr. 22, 245–252. doi: 10.1016/j.learninstruc.2012.01.003

De Grez, L., Valcke, M., and Roozen, I. (2012). How effective are self- and peer assessment of oral presentation skills compared with teachers' assessments? Active Learn. High. Educ. 13, 129–142. doi: 10.1177/1469787412441284

Dolosic, H. (2018). An examination of self-assessment and interconnected facets of second language reading. Read. Foreign Langu. 30, 189–208.

Draper, S. W. (2009). What are learners actually regulating when given feedback? Br. J. Educ. Technol. 40, 306–315. doi: 10.1111/j.1467-8535.2008.00930.x

Dunlosky, J., and Ariel, R. (2011). "Self-regulated learning and the allocation of study time," in Psychology of Learning and Motivation, Vol. 54, ed B. Ross (Cambridge, MA: Academic Press), 103–140. doi: 10.1016/B978-0-12-385527-5.00004-8

Dunlosky, J., and Rawson, K. A. (2012). Overconfidence produces underachievement: inaccurate self evaluations undermine students' learning and retention. Learn. Instr. 22, 271–280. doi: 10.1016/j.learninstruc.2011.08.003

Dweck, C. (2006). Mindset: The New Psychology of Success. New York, NY: Random House.

Epstein, R. M., Siegel, D. J., and Silberman, J. (2008). Self-monitoring in clinical practice: a challenge for medical educators. J. Contin. Educ. Health Prof. 28, 5–13. doi: 10.1002/chp.149

Eva, K. W., and Regehr, G. (2008). “I'll never play professional football” and other fallacies of self-assessment. J. Contin. Educ. Health Prof. 28, 14–19. doi: 10.1002/chp.150

Falchikov, N. (2005). Improving Assessment Through Student Involvement: Practical Solutions for Aiding Learning in Higher and Further Education. London: Routledge Falmer.

Fastre, G. M. J., van der Klink, M. R., Sluijsmans, D., and van Merrienboer, J. J. G. (2012). Drawing students' attention to relevant assessment criteria: effects on self-assessment skills and performance. J. Voc. Educ. Train. 64, 185–198. doi: 10.1080/13636820.2011.630537

Fastre, G. M. J., van der Klink, M. R., and van Merrienboer, J. J. G. (2010). The effects of performance-based assessment criteria on student performance and self-assessment skills. Adv. Health Sci. Educ. 15, 517–532. doi: 10.1007/s10459-009-9215-x

Fitzpatrick, B., and Schulz, H. (2016). “Teaching young students to self-assess critically,” Paper presented at the Annual Meeting of the American Educational Research Association (Washington, DC).

Franken, A. S. (1992). I'm Good Enough, I'm Smart Enough, and Doggone it, People Like Me! Daily affirmations by Stuart Smalley. New York, NY: Dell.

Glaser, C., and Brunstein, J. C. (2007). Improving fourth-grade students' composition skills: effects of strategy instruction and self-regulation procedures. J. Educ. Psychol. 99, 297–310. doi: 10.1037/0022-0663.99.2.297

Gonida, E. N., and Leondari, A. (2011). Patterns of motivation among adolescents with biased and accurate self-efficacy beliefs. Int. J. Educ. Res. 50, 209–220. doi: 10.1016/j.ijer.2011.08.002

Graham, S., Hebert, M., and Harris, K. R. (2015). Formative assessment and writing. Elem. Sch. J. 115, 523–547. doi: 10.1086/681947

Guillory, J. J., and Blankson, A. N. (2017). Using recently acquired knowledge to self-assess understanding in the classroom. Sch. Teach. Learn. Psychol. 3, 77–89. doi: 10.1037/stl0000079

Hacker, D. J., Bol, L., Horgan, D. D., and Rakow, E. A. (2000). Test prediction and performance in a classroom context. J. Educ. Psychol. 92, 160–170. doi: 10.1037/0022-0663.92.1.160

Harding, J. L., and Hbaci, I. (2015). Evaluating pre-service teachers math teaching experience from different perspectives. Univ. J. Educ. Res. 3, 382–389. doi: 10.13189/ujer.2015.030605

Harris, K. R., Graham, S., Mason, L. H., and Friedlander, B. (2008). Powerful Writing Strategies for All Students. Baltimore, MD: Brookes.

Harris, L. R., and Brown, G. T. L. (2013). Opportunities and obstacles to consider when using peer- and self-assessment to improve student learning: case studies into teachers' implementation. Teach. Teach. Educ. 36, 101–111. doi: 10.1016/j.tate.2013.07.008

Hattie, J., and Timperley, H. (2007). The power of feedback. Rev. Educ. Res. 77, 81–112. doi: 10.3102/003465430298487

Hawe, E., and Parr, J. (2014). Assessment for learning in the writing classroom: an incomplete realization. Curr. J. 25, 210–237. doi: 10.1080/09585176.2013.862172

Hawkins, S. C., Osborne, A., Schofield, S. J., Pournaras, D. J., and Chester, J. F. (2012). Improving the accuracy of self-assessment of practical clinical skills using video feedback: the importance of including benchmarks. Med. Teach. 34, 279–284. doi: 10.3109/0142159X.2012.658897

Huang, Y., and Gui, M. (2015). Articulating teachers' expectations afore: Impact of rubrics on Chinese EFL learners' self-assessment and speaking ability. J. Educ. Train. Stud. 3, 126–132. doi: 10.11114/jets.v3i3.753

Kaderavek, J. N., Gillam, R. B., Ukrainetz, T. A., Justice, L. M., and Eisenberg, S. N. (2004). School-age children's self-assessment of oral narrative production. Commun. Disord. Q. 26, 37–48. doi: 10.1177/15257401040260010401

Karnilowicz, W. (2012). A comparison of self-assessment and tutor assessment of undergraduate psychology students. Soc. Behav. Person. 40, 591–604. doi: 10.2224/sbp.2012.40.4.591

Kevereski, L. (2017). (Self) evaluation of knowledge in students' population in higher education in Macedonia. Res. Pedag. 7, 69–75. doi: 10.17810/2015.49

Kingston, N. M., and Nash, B. (2011). Formative assessment: a meta-analysis and a call for research. Educ. Meas. 30, 28–37. doi: 10.1111/j.1745-3992.2011.00220.x

Kitsantas, A., and Zimmerman, B. J. (2006). Enhancing self-regulation of practice: the influence of graphing and self-evaluative standards. Metacogn. Learn. 1, 201–212. doi: 10.1007/s11409-006-9000-7

Kluger, A. N., and DeNisi, A. (1996). The effects of feedback interventions on performance: a historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychol. Bull. 119, 254–284. doi: 10.1037/0033-2909.119.2.254

Kollar, I., Fischer, F., and Hesse, F. (2006). Collaboration scripts: a conceptual analysis. Educ. Psychol. Rev. 18, 159–185. doi: 10.1007/s10648-006-9007-2

Kolovelonis, A., Goudas, M., and Dermitzaki, I. (2012). Students' performance calibration in a basketball dribbling task in elementary physical education. Int. Electron. J. Elem. Educ. 4, 507–517.

Koriat, A. (2012). The relationships between monitoring, regulation and performance. Learn. Instr. 22, 296–298. doi: 10.1016/j.learninstruc.2012.01.002

Kostons, D., van Gog, T., and Paas, F. (2012). Training self-assessment and task-selection skills: a cognitive approach to improving self-regulated learning. Learn. Instr. 22, 121–132. doi: 10.1016/j.learninstruc.2011.08.004

Labuhn, A. S., Zimmerman, B. J., and Hasselhorn, M. (2010). Enhancing students' self-regulation and mathematics performance: the influence of feedback and self-evaluative standards. Metacogn. Learn. 5, 173–194. doi: 10.1007/s11409-010-9056-2

Leach, L. (2012). Optional self-assessment: some tensions and dilemmas. Assess. Evalu. High. Educ. 37, 137–147. doi: 10.1080/02602938.2010.515013

Lew, M. D. N., Alwis, W. A. M., and Schmidt, H. G. (2010). Accuracy of students' self-assessment and their beliefs about its utility. Assess. Evalu. High. Educ. 35, 135–156. doi: 10.1080/02602930802687737

Lin-Siegler, X., Shaenfield, D., and Elder, A. D. (2015). Contrasting case instruction can improve self-assessment of writing. Educ. Technol. Res. Dev. 63, 517–537. doi: 10.1007/s11423-015-9390-9

Lipnevich, A. A., Berg, D. A. G., and Smith, J. K. (2016). "Toward a model of student response to feedback," in The Handbook of Human and Social Conditions in Assessment, eds G. T. L. Brown and L. R. Harris (New York, NY: Routledge), 169–185.

Lopez, R., and Kossack, S. (2007). Effects of recurring use of self-assessment in university courses. Int. J. Learn. 14, 203–216. doi: 10.18848/1447-9494/CGP/v14i04/45277

Lopez-Pastor, V. M., Fernandez-Balboa, J.-M., Santos Pastor, M. L., and Aranda, A. F. (2012). Students' self-grading, professor's grading and negotiated final grading at three university programmes: analysis of reliability and grade difference ranges and tendencies. Assess. Evalu. High. Educ. 37, 453–464. doi: 10.1080/02602938.2010.545868

Lui, A. (2017). Validity of the responses to feedback survey: operationalizing and measuring students' cognitive and affective responses to teachers' feedback (Doctoral dissertation). Albany, NY: University at Albany, SUNY.

Marks, M. B., Haug, J. C., and Hu, H. (2018). Investigating undergraduate business internships: do supervisor and self-evaluations differ? J. Educ. Bus. 93, 33–45. doi: 10.1080/08832323.2017.1414025

Memis, E. K., and Seven, S. (2015). Effects of an SWH approach and self-evaluation on sixth grade students' learning and retention of an electricity unit. Int. J. Prog. Educ. 11, 32–49.

Metcalfe, J., and Kornell, N. (2005). A region of proximal learning model of study time allocation. J. Mem. Langu. 52, 463–477. doi: 10.1016/j.jml.2004.12.001

Meusen-Beekman, K. D., Joosten-ten Brinke, D., and Boshuizen, H. P. A. (2016). Effects of formative assessments to develop self-regulation among sixth grade students: results from a randomized controlled intervention. Stud. Educ. Evalu. 51, 126–136. doi: 10.1016/j.stueduc.2016.10.008

Micán, D. A., and Medina, C. L. (2017). Boosting vocabulary learning through self-assessment in an English language teaching context. Assess. Evalu. High. Educ. 42, 398–414. doi: 10.1080/02602938.2015.1118433

Miller, T. M., and Geraci, L. (2011). Training metacognition in the classroom: the influence of incentives and feedback on exam predictions. Metacogn. Learn. 6, 303–314. doi: 10.1007/s11409-011-9083-7

Murakami, C., Valvona, C., and Broudy, D. (2012). Turning apathy into activeness in oral communication classes: regular self- and peer-assessment in a TBLT programme. System 40, 407–420. doi: 10.1016/j.system.2012.07.003

Nagel, M., and Lindsey, B. (2018). The use of classroom clickers to support improved self-assessment in introductory chemistry. J. College Sci. Teach. 47, 72–79.

Ndoye, A. (2017). Peer/self-assessment and student learning. Int. J. Teach. Learn. High. Educ. 29, 255–269.

Nguyen, T., and Foster, K. A. (2018). Research note—multiple time point course evaluation and student learning outcomes in an MSW course. J. Soc. Work Educ. 54, 715–723. doi: 10.1080/10437797.2018.1474151

Nicol, D., and Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Stud. High. Educ. 31, 199–218. doi: 10.1080/03075070600572090

Nielsen, K. (2014). Self-assessment methods in writing instruction: a conceptual framework, successful practices and essential strategies. J. Res. Read. 37, 1–16. doi: 10.1111/j.1467-9817.2012.01533.x

Nowell, C., and Alston, R. M. (2007). I thought I got an A! Overconfidence across the economics curriculum. J. Econ. Educ. 38, 131–142. doi: 10.3200/JECE.38.2.131-142

Nugteren, M. L., Jarodzka, H., Kester, L., and Van Merriënboer, J. J. G. (2018). Self-regulation of secondary school students: self-assessments are inaccurate and insufficiently used for learning-task selection. Instr. Sci. 46, 357–381. doi: 10.1007/s11251-018-9448-2

Panadero, E., and Alonso-Tapia, J. (2013). Self-assessment: theoretical and practical connotations. When it happens, how is it acquired and what to do to develop it in our students. Electron. J. Res. Educ. Psychol. 11, 551–576. doi: 10.14204/ejrep.30.12200

Panadero, E., Alonso-Tapia, J., and Huertas, J. A. (2012). Rubrics and self-assessment scripts effects on self-regulation, learning and self-efficacy in secondary education. Learn. Individ. Differ. 22, 806–813. doi: 10.1016/j.lindif.2012.04.007

Panadero, E., Alonso-Tapia, J., and Huertas, J. A. (2014). Rubrics vs. self-assessment scripts: effects on first year university students' self-regulation and performance. J. Study Educ. Dev. 37, 149–183. doi: 10.1080/02103702.2014.881655

Panadero, E., Alonso-Tapia, J., and Reche, E. (2013). Rubrics vs. self-assessment scripts effect on self-regulation, performance and self-efficacy in pre-service teachers. Stud. Educ. Evalu. 39, 125–132. doi: 10.1016/j.stueduc.2013.04.001

Panadero, E., Brown, G. T. L., and Strijbos, J.-W. (2016a). The future of student self-assessment: a review of known unknowns and potential directions. Educ. Psychol. Rev. 28, 803–830. doi: 10.1007/s10648-015-9350-2

Panadero, E., Jonsson, A., and Botella, J. (2017). Effects of self-assessment on self-regulated learning and self-efficacy: four meta-analyses. Educ. Res. Rev. 22, 74–98. doi: 10.1016/j.edurev.2017.08.004

Panadero, E., Jonsson, A., and Strijbos, J. W. (2016b). “Scaffolding self-regulated learning through self-assessment and peer assessment: guidelines for classroom implementation,” in Assessment for Learning: Meeting the Challenge of Implementation, eds D. Laveault and L. Allal (New York, NY: Springer), 311–326. doi: 10.1007/978-3-319-39211-0_18

Panadero, E., and Romero, M. (2014). To rubric or not to rubric? The effects of self-assessment on self-regulation, performance and self-efficacy. Assess. Educ. 21, 133–148. doi: 10.1080/0969594X.2013.877872

Papanthymou, A., and Darra, M. (2018). Student self-assessment in higher education: the international experience and the Greek example. World J. Educ. 8, 130–146. doi: 10.5430/wje.v8n6p130

Punhagui, G. C., and de Souza, N. A. (2013). Self-regulation in the learning process: actions through self-assessment activities with Brazilian students. Int. Educ. Stud. 6, 47–62. doi: 10.5539/ies.v6n10p47

Raaijmakers, S. F., Baars, M., Paas, F., van Merriënboer, J. J. G., and van Gog, T. (2019). Metacogn. Learn. 1–22. doi: 10.1007/s11409-019-09189-5

Raaijmakers, S. F., Baars, M., Schaap, L., Paas, F., van Merriënboer, J. J. G., and van Gog, T. (2017). Training self-regulated learning skills with video modeling examples: do task-selection skills transfer? Instr. Sci. 46, 273–290. doi: 10.1007/s11251-017-9434-0

Ratminingsih, N. M., Marhaeni, A. A. I. N., and Vigayanti, L. P. D. (2018). Self-assessment: the effect on students' independence and writing competence. Int. J. Instruc. 11, 277–290. doi: 10.12973/iji.2018.11320a

Ross, J. A., Rolheiser, C., and Hogaboam-Gray, A. (1998). “Impact of self-evaluation training on mathematics achievement in a cooperative learning environment,” Paper presented at the annual meeting of the American Educational Research Association (San Diego, CA).

Ross, J. A., and Starling, M. (2008). Self-assessment in a technology-supported environment: the case of grade 9 geography. Assess. Educ. 15, 183–199. doi: 10.1080/09695940802164218

Samaie, M., Nejad, A. M., and Qaracholloo, M. (2018). An inquiry into the efficiency of WhatsApp for self- and peer-assessments of oral language proficiency. Br. J. Educ. Technol. 49, 111–126. doi: 10.1111/bjet.12519

Sanchez, C. E., Atkinson, K. M., Koenka, A. C., Moshontz, H., and Cooper, H. (2017). Self-grading and peer-grading for formative and summative assessments in 3rd through 12th grade classrooms: a meta-analysis. J. Educ. Psychol. 109, 1049–1066. doi: 10.1037/edu0000190

Sargeant, J. (2008). Toward a common understanding of self-assessment. J. Contin. Educ. Health Prof. 28, 1–4. doi: 10.1002/chp.148

Sargeant, J., Mann, K., van der Vleuten, C., and Metsemakers, J. (2008). “Directed” self-assessment: practice and feedback within a social context. J. Contin. Educ. Health Prof. 28, 47–54. doi: 10.1002/chp.155

Shute, V. (2008). Focus on formative feedback. Rev. Educ. Res. 78, 153–189. doi: 10.3102/0034654307313795

Silver, I., Campbell, C., Marlow, B., and Sargeant, J. (2008). Self-assessment and continuing professional development: the Canadian perspective. J. Contin. Educ. Health Prof. 28, 25–31. doi: 10.1002/chp.152

Siow, L.-F. (2015). Students' perceptions on self- and peer-assessment in enhancing learning experience. Malaysian Online J. Educ. Sci. 3, 21–35.

Son, L. K., and Metcalfe, J. (2000). Metacognitive and control strategies in study-time allocation. J. Exp. Psychol. Learn. Mem. Cogn. 26, 204–221. doi: 10.1037/0278-7393.26.1.204

Tan, K. (2004). Does student self-assessment empower or discipline students? Assess. Evalu. High. Educ. 29, 651–662. doi: 10.1080/0260293042000227209

Tan, K. (2009). Meanings and practices of power in academics' conceptions of student self-assessment. Teach. High. Educ. 14, 361–373. doi: 10.1080/13562510903050111

Taras, M. (2008). Issues of power and equity in two models of self-assessment. Teach. High. Educ. 13, 81–92. doi: 10.1080/13562510701794076

Tejeiro, R. A., Gómez-Vallecillo, J. L., Romero, A. F., Pelegrina, M., Wallace, A., and Emberley, E. (2012). Summative self-assessment in higher education: implications of its counting towards the final mark. Electron. J. Res. Educ. Psychol. 10, 789–812.

Thawabieh, A. M. (2017). A comparison between students' self-assessment and teachers' assessment. J. Curri. Teach. 6, 14–20. doi: 10.5430/jct.v6n1p14

Tulgar, A. T. (2017). Selfie@ssessment as an alternative form of self-assessment at undergraduate level in higher education. J. Langu. Linguis. Stud. 13, 321–335.

van Helvoort, A. A. J. (2012). How adult students in information studies use a scoring rubric for the development of their information literacy skills. J. Acad. Librarian. 38, 165–171. doi: 10.1016/j.acalib.2012.03.016

van Loon, M. H., de Bruin, A. B. H., van Gog, T., van Merriënboer, J. J. G., and Dunlosky, J. (2014). Can students evaluate their understanding of cause-and-effect relations? The effects of diagram completion on monitoring accuracy. Acta Psychol. 151, 143–154. doi: 10.1016/j.actpsy.2014.06.007

van Reybroeck, M., Penneman, J., Vidick, C., and Galand, B. (2017). Progressive treatment and self-assessment: effects on students' automatisation of grammatical spelling and self-efficacy beliefs. Read. Writing 30, 1965–1985. doi: 10.1007/s11145-017-9761-1

Wang, W. (2017). Using rubrics in student self-assessment: student perceptions in the English as a foreign language writing context. Assess. Evalu. High. Educ. 42, 1280–1292. doi: 10.1080/02602938.2016.1261993

Wollenschläger, M., Hattie, J., Machts, N., Möller, J., and Harms, U. (2016). What makes rubrics effective in teacher-feedback? Transparency of learning goals is not enough. Contemp. Educ. Psychol. 44–45, 1–11. doi: 10.1016/j.cedpsych.2015.11.003

Yan, Z., and Brown, G. T. L. (2017). A cyclical self-assessment process: towards a model of how students engage in self-assessment. Assess. Evalu. High. Educ. 42, 1247–1262. doi: 10.1080/02602938.2016.1260091

Yilmaz, F. N. (2017). Reliability of scores obtained from self-, peer-, and teacher-assessments on teaching materials prepared by teacher candidates. Educ. Sci. 17, 395–409. doi: 10.12738/estp.2017.2.0098

Zimmerman, B. J. (2000). Self-efficacy: an essential motive to learn. Contemp. Educ. Psychol. 25, 82–91. doi: 10.1006/ceps.1999.1016

Zimmerman, B. J., and Schunk, D. H. (2011). “Self-regulated learning and performance: an introduction and overview,” in Handbook of Self-Regulation of Learning and Performance, eds B. J. Zimmerman and D. H. Schunk (New York, NY: Routledge), 1–14.

Keywords: self-assessment, self-evaluation, self-grading, formative assessment, classroom assessment, self-regulated learning (SRL)

Citation: Andrade HL (2019) A Critical Review of Research on Student Self-Assessment. Front. Educ. 4:87. doi: 10.3389/feduc.2019.00087

Received: 27 April 2019; Accepted: 02 August 2019; Published: 27 August 2019.

Copyright © 2019 Andrade. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Heidi L. Andrade, handrade@albany.edu

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
