
Half-Day Workshop at LAK 2019: Innovative Problem Solving Assessment with Learning Analytics (Call for Papers)


 

The new website of the workshop is up. Please click here to be redirected.

 

 

Location: Tempe, Arizona, US

Time: March 4th, 1:30pm-5:00pm

In conjunction with the 9th International Conference on Learning Analytics and Knowledge (LAK 2019), March 4-8, 2019

 

Workshop objectives

  • Participants are expected to gain an understanding of how learning analytics can facilitate problem solving assessment, along with the potential challenges.
  • The workshop explores how researchers in learning science and researchers in learning analytics can work together to create better measures of problem solving.

 

Abstract

Solving dynamic and ill-structured problems is one of the most important skills for the 21st century. In addition, people often need to collaborate to solve problems together in real life. Therefore, it is important to establish assessments that evaluate both individual and collaborative problem solving abilities for K-12 students, to ensure that students are ready to deal with real-life problems when they leave school. To achieve this goal, learning scientists have designed various simulations to implement interactive and dynamic assessments. On the other hand, learning analytics techniques such as regression models, neural networks, and hidden Markov models have been used to analyze problem solving procedures. The workshop aims to further explore how learning analytics can facilitate both individual and collaborative problem solving assessment through presentations, interactive events, and roundtable discussion among researchers with different backgrounds but a shared interest.

  

Background

Regardless of their occupations, people need to handle and solve different types of problems every day. Problem solving is the process of finding a method to achieve a goal from an initial state. However, real-life problems are usually ill-structured, without clear goals and givens, so they cannot be solved in a routine manner. Knowing how to solve problems in real-life situations has become an essential skill for the 21st century (Griffin, McGaw, & Care, 2012; Greiff et al., 2014). The assessment of problem solving skills differs fundamentally from the traditional assessment of curriculum content knowledge. A problem solving assessment has to be able to measure students' abilities to deal with ill-structured and dynamic environments, which requires that the assessment environment change in response to students' behaviors and responses. Traditional static, paper-based assessment clearly fails to do so. Existing problem solving assessments usually provide students with dynamic situations through a series of simulations (Zhang, Yu, Li, & Wang, 2017; Schweizer, Wüstenberg, & Greiff, 2013). Problem solving performance is then evaluated in terms of students' outputs in the simulation. Although students' behaviors, also called process data, are usually logged and analyzed as well, the analysis of process data is still very limited. Aggregated measures such as time on task and number of clicks are often used to profile problem solving procedures, and only a few studies have identified simple problem solving strategies from the log files (Zhang et al., 2014; Greiff, Niepel, Scherer, & Martin, 2016).
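As a concrete illustration of this kind of aggregation, the minimal sketch below computes two such indicators (time on task and number of clicks) from a list of logged events. The event fields and action names are hypothetical examples, not the log schema of any particular assessment platform.

```python
# Minimal sketch: aggregating process data into simple indicators.
# The log format (student, timestamp, action) is a hypothetical example.
from collections import defaultdict

events = [
    {"student": "s1", "timestamp": 3.0,  "action": "click"},
    {"student": "s1", "timestamp": 9.5,  "action": "click"},
    {"student": "s1", "timestamp": 41.0, "action": "submit"},
    {"student": "s2", "timestamp": 5.0,  "action": "click"},
    {"student": "s2", "timestamp": 66.0, "action": "submit"},
]

def aggregate(events):
    """Return per-student time on task and click count."""
    by_student = defaultdict(list)
    for e in events:
        by_student[e["student"]].append(e)
    indicators = {}
    for student, evs in by_student.items():
        times = [e["timestamp"] for e in evs]
        indicators[student] = {
            "time_on_task": max(times) - min(times),
            "clicks": sum(e["action"] == "click" for e in evs),
        }
    return indicators

print(aggregate(events))
# {'s1': {'time_on_task': 38.0, 'clicks': 2}, 's2': {'time_on_task': 61.0, 'clicks': 1}}
```

Indicators of this kind can then be correlated with problem solving outcomes, which is exactly the first analysis approach described in the next paragraphs.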

Besides individual problem solving assessment, researchers have started to emphasize collaborative problem solving assessment in recent years. Collaboration is a long-standing practice in many environments, and people often need to collaborate to solve real-life problems (Wilson, Gochyyev, & Scalise, 2017). Because collaborative problem solving involves at least two participants, a participant's collaborative performance is highly affected by his or her collaborators, so the assessment of this skill faces a reliability issue. Some researchers have addressed the issue by creating simulated agents, also called avatars, that solve problems together with an individual (Rosen, 2017). Because the behaviors of a simulated agent are fully scripted in advance, only the individual can affect the collaboration. Other researchers have carefully created joint activities for multiple individuals and managed to assess collaborative skills by analyzing their collaborative behaviors (Wilson, Gochyyev, & Scalise, 2017).

Several different learning analytics techniques have been used to analyze both individual and collaborative problem solving. In general, two types of approaches have been adopted: (1) aggregate problem solving behaviors into indicators and explore the correspondence between the aggregated indicators and problem solving outputs; correlation analysis and supervised learning algorithms such as decision trees and neural networks are used in this approach. (2) Directly analyze problem solving behaviors without aggregation; algorithms such as hidden Markov models, lag sequential analysis, and association rule mining are used in this approach. Regardless of the analysis approach adopted, the problem solving actions in the log files have to be reasonably coded before being fed to the algorithms. The coding process can be treated as a kind of data pre-processing, as in typical data mining. Reasonable here means that the coded behaviors should be neither too specific nor too general. For example, a coded behavior is too specific if it records the exact pixel where an action occurs, and too general if it only records the existence of an action. A finite state machine is often used to automatically code problem solving behaviors at an appropriate level of abstraction. The learning theory aligned with the problem solving assessment should guide the design of the behavior coding scheme. In this context, learning analytics can be seen as the method that transforms the learning theory of problem solving into analysis results. Therefore, it is important to explore the intersection of problem solving assessment and learning analytics.
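To make the idea of automatic behavior coding more concrete, the sketch below implements a toy finite state machine that collapses raw interface events into coarser coded behaviors. The states, raw events, and coded labels are invented for illustration; an actual coding scheme would be derived from the learning theory behind the assessment.

```python
# Toy finite state machine for coding raw log events into behaviors.
# States, raw events, and output codes are illustrative only.

# transitions: (current_state, raw_event) -> (next_state, coded_behavior or None)
TRANSITIONS = {
    ("idle",      "open_document"):  ("reading",   "START_INFO_SEARCH"),
    ("reading",   "open_document"):  ("reading",   None),  # still searching, no new code
    ("reading",   "close_document"): ("idle",      "END_INFO_SEARCH"),
    ("idle",      "edit_answer"):    ("answering", "START_SOLUTION"),
    ("answering", "submit"):         ("idle",      "SUBMIT_SOLUTION"),
}

def code_behaviors(raw_events, start_state="idle"):
    """Run the FSM over a sequence of raw events and emit coded behaviors."""
    state, coded = start_state, []
    for event in raw_events:
        state, label = TRANSITIONS.get((state, event), (state, None))
        if label is not None:
            coded.append(label)
    return coded

raw = ["open_document", "open_document", "close_document", "edit_answer", "submit"]
print(code_behaviors(raw))
# ['START_INFO_SEARCH', 'END_INFO_SEARCH', 'START_SOLUTION', 'SUBMIT_SOLUTION']
```

The resulting coded sequence, rather than the raw clickstream, is what would be passed to sequence-oriented algorithms such as hidden Markov models or lag sequential analysis.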

Note that the design of the user interface for an assessment actually decides which problem solving unit actions go into the log files. Obviously, it is impossible to capture how a student solves a problem if the student is only required to fill in the final answer. Theories from learning science and cognitive science should drive the design to ensure that appropriate problem solving unit actions are recorded for the purpose of assessment. For example, the givens of a problem may be deliberately hidden behind a series of interactions when knowledge acquisition skill needs to be assessed, or problem-relevant documents may be mixed with irrelevant documents in a virtual library when the skill of information identification needs to be assessed (Zhang, Yu, Li, & Wang, 2017).
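For instance, if the interface logs which documents a student opens together with a relevance flag, an information-identification score can be computed directly from those unit actions. The sketch below is a hypothetical illustration of that idea, not the scoring rule used in Zhang, Yu, Li, & Wang (2017).

```python
# Hypothetical unit actions logged by a virtual-library interface.
# Each record notes which document was opened and whether it is relevant
# to the problem; the fields and scoring rule are illustrative only.
opened = [
    {"doc": "d1", "relevant": True},
    {"doc": "d2", "relevant": False},
    {"doc": "d3", "relevant": True},
]

def information_identification_score(opened, total_relevant):
    """Simple precision/recall-style score over opened documents."""
    hits = sum(r["relevant"] for r in opened)
    precision = hits / len(opened) if opened else 0.0
    recall = hits / total_relevant if total_relevant else 0.0
    return {"precision": precision, "recall": recall}

print(information_identification_score(opened, total_relevant=4))
# {'precision': 0.6666666666666666, 'recall': 0.5}
```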

In summary, researchers in learning science and data analytics have to work together to develop high-quality problem solving assessments. The proposed workshop aims to bring together researchers from both areas who are interested in facilitating problem solving assessment with learning analytics. The organizers will invite researchers who have previously conducted related studies to present their findings and lessons learned. All workshop participants are then invited to discuss together, learn from each other, and explore collaboration opportunities.

References

Greiff, S., Niepel, C., Scherer, R., & Martin, R. (2016). Understanding students' performance in a computer-based assessment of complex problem solving: An analysis of behavioral data from computer-generated log files. Computers in Human Behavior, 61, 36-46

Greiff, S., Wüstenberg, S., Csapó, B., Demetriou, A., Hautamäki, J., Graesser, A. C.,... Martin, R. (2014). Domain-general problem solving skills and education in the 21st century. Educational Research Review, 13, 74-83

Griffin, P., McGaw, B., & Care, E. (2012). Assessment and teaching of 21st century skills. Springer.

Rosen, Y. (2017). Assessing Students in Human-to-Agent Settings to Inform Collaborative Problem-Solving Learning. Journal of Educational Measurement, 54(1), 36-53

Schweizer, F., Wüstenberg, S., & Greiff, S. (2013). Validity of the MicroDYN approach: Complex problem solving predicts school grades beyond working memory capacity. Learning & Individual Differences, 24(2), 42-52

Wilson, M., Gochyyev, P., & Scalise, K. (2017). Modeling data from collaborative assessments: Learning in digital interactive social networks. Journal of Educational Measurement, 54(1), 85-102

Zhang, L., VanLehn, K., Girard, S., Burleson, W., Chavez-Echeagaray, M. E., Gonzalez-Sanchez, J.,... Hidalgo-Pontet, Y. (2014). Evaluation of a meta-tutor for constructing models of dynamic systems. Computers & Education, 75, 196-217

Zhang, L., Yu, S., Li, B., & Wang, J. (2017). Can Students Identify the Relevant Information to Solve a Problem? Journal of Educational Technology & Society, 20(4), 288-299

 

Schedule

This half-day workshop will take a participatory approach, blending presentations, interactive events, and roundtable discussion. The tentative schedule is:

  • Introduction (20 mins): The organizers introduce the background of problem solving assessment and the current state of how such assessments are facilitated by learning analytics.

  • Presentations: Presentations of accepted papers and invited talks on learning analytics empowered problem solving assessment.

  • Interactive events: Interactive events allow participants to fully communicate with each other. Contributors will show interactive demonstrations and let participants experience how problem solving assessments can be conducted with the help of learning analytics.

  • Round-table discussion: All the participants get together to discuss the key issues that have been observed.   

 

 

Who can participate

Participation will be "open" (i.e., any interested delegate may register to attend). This workshop will be of interest to a wide range of LAK delegates, including students and researchers actively engaged in individual and collaborative problem solving assessment, educators in schools, and policymakers. Participants are encouraged to bring their own devices (laptops preferred, or tablets with keyboards) with a modern web browser (e.g. Chrome, Firefox, Safari, Microsoft Edge or IE10+).

 

Paper submission

The workshop invites researchers to present their work. If you would like to share your study at the workshop, please submit a paper of at most 4 pages, using the Companion Proceedings Template, to lishan@bnu.edu.cn with the subject line "LAK 19 workshop".

Each submission should address some of the points below:

  • Describe the problem solving skills the study intends to assess, and the theoretical framework from the perspective of learning science or cognitive science.
  • Explain the design of the assessment and the expected behaviors of the test takers.
  • Discuss any case studies or experiments of problem solving assessment that have been conducted.
  • Explain the techniques used for analyzing problem solving behaviors, including but not limited to neural networks, dynamic Bayesian networks, Markov modeling, and finite state machines.
  • Explain how learning scientists and data analysts can collaborate to improve problem solving assessment or understand problem solving procedures.
  • Explain how curriculum content may be integrated with problem solving assessment to motivate school teachers.

 

Important Dates

  • 10 December 2018 Workshop papers submission deadline
  • 4 January 2019 Notifications sent out (prior to early-bird registration deadline of 8 January 2019)

 

Workshop organisers

Dr. Lishan Zhang is a Research Scientist at the Advanced Innovation Center for Future Education, where he leads the project on problem-solving assessment for K-12 students, and a Postdoctoral Fellow at Beijing Normal University. He received a Ph.D. in Computer Science from Arizona State University. He has published over 15 peer-reviewed academic papers. His research interests include intelligent tutoring systems, student modelling for personalized learning, and educational data mining. He is also the PI of a National Science Foundation of China research project that studies how problem-based learning can be supported by technology.

Dr. Baoping Li is a Lecturer in the Faculty of Education, Beijing Normal University. Her research interests include smart learning environment design and assessment, teachers' and students' perceptions of learning environments, integrating ICTs into teaching and learning activities, and technology-enhanced science education.

Dr. Yigal Rosen is a Senior Director of Learning Solutions at ACTNext/ACT and a Project Director of the PISA 2021 Creative Thinking Assessment. He and his group provide thought leadership on learning sciences, adaptive learning, and innovative assessment design in support of ACT's transition to a learning company. In addition to his role at ACTNext/ACT, Yigal teaches the design of technology-enhanced assessments at the Harvard Graduate School of Education. Previously, Yigal led the Vice Provost for Advances in Learning Research Group at Harvard University. He and his group transformed digital learning in higher education at Harvard and beyond through adaptive learning, advanced data analytics, and innovative assessments. Prior to joining Harvard, he was a Senior Research Scientist at Pearson, leading next-generation research and development in PISA, NAEP, PARCC, and 21st century skills programs.

Kristin Stoeffler, Senior Learning Solutions Designer at ACTNext/ACT, is the architect of the Cross-Cutting Capabilities components of the ACT Holistic Framework. Her work is focused on the design and development of the 21st century skill constructs (Collaborative Problem Solving, Technology Skills, Information & Communication Technology, Learning Skills, Creativity, etc.) within the ecosystem of ACT constructs, assessment products, and learning tools. Ms. Stoeffler is the lead assessment designer for PISA 2021 Innovative Domain – Creative Thinking and one of the key contributors to the validation efforts including cognitive labs and validation studies. Her work also includes development and prototyping of assessment games and leveraging new technologies for measurement of collaboration and scientific inquiry skills.

Dr. Shengquan Yu is a Professor at Beijing Normal University. He received a Ph.D. in Educational Technology from Beijing Normal University. His research fields include mobile and ubiquitous learning, ICT and curriculum integration, network learning technology, and education informatization policy. He has published about 100 peer-reviewed academic papers, four popular science books, and three scholarly monographs.