Learning by Design™: Iterations of Design Challenges
for Better Learning of Science Skills
Janet L. Kolodner
College of Computing
Georgia Institute of Technology
Atlanta, GA 30332-0280
jlk@cc.gatech.edu
1. Introduction
In Learning by Design™ projects, students iteratively reflect upon their hands-on experiences to achieve design challenges as scientists and engineers do. In addition to learning science content, they learn how to design fair tests, justify test results with evidence, and explain their findings scientifically. In this paper, we explain how and why our project differs from other hands-on projects, where learning tends to be more superficial, and propose that iteration on collaborative design challenges plays a key role in learning both content and scientific skills.
Learning by Design (Hmelo et al., 2000; Kolodner et al., 1998, 2003a, 2003b) is a project-based inquiry approach to science education (Blumenfeld et al., 1991) for middle school (ages 12 to 14; grades 6 to 8) where students learn science content and skills in the context of achieving design challenges. For example, to learn about forces and motion, they design a miniature vehicle and its propulsion system that can navigate several hills on its own power. To learn about erosion, its sources, and ways of managing it, students design an erosion management solution for a basketball court to be built at the bottom of a hill. These challenges are interesting, but too hard for one person to achieve alone and too hard even for groups to achieve on the first trial. The challenges also require certain scientific knowledge, which provides motivation for investigation. Each challenge can be achieved in a variety of ways, providing students with a natural desire to know each other's solutions, tangible feedback to monitor their progress, and rich potential for iteration and collaboration. Through this iteration, students are expected to learn not just science content but also how to do science themselves.
Our design of LBD's classroom sequencing and ritualized activity structures is based on advice offered by Case-Based Reasoning (CBR) (Kolodner, 1993, 1997; Schank, 1982, 1999), an approach to building intelligent computer programs that can reason based on their experiences. CBR was inspired by human reasoning. The many software prototypes built by the CBR community helped us understand two things: how to build more intelligent computer systems, and the reasoning needed to learn effectively from experience and reuse lessons learned in new situations. The ability to learn for transfer is a chief aim in education; hence, we believed that applying lessons learned in our CBR research to science education could result in deeper and more lasting learning, both of science content and of the skills and practices engaged in while learning. CBR's advice about learning is consistent with the literature on learning for transfer (see, e.g., Bransford et al., 1999) and adds to it specificity about the processes involved in learning. Learning by Design is consistent with the advice offered by CBR and the advice offered by the learning-for-transfer literature. The LBD experience provides an example of how what we know about human learning can be used to design learning environments that promote deep and lasting learning of science content and practices.
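Although a full treatment of CBR is beyond the scope of this paper, its classic retrieve-reuse-revise-retain cycle can be sketched in a few lines. The sketch below is our own minimal illustration under simplifying assumptions (the class names, the feature-overlap similarity measure, and all identifiers are invented for exposition; they do not come from the LBD software):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Case:
    problem: dict    # features describing the situation faced
    solution: str    # what was done about it
    outcome: str     # what happened, including lessons learned

@dataclass
class CaseLibrary:
    cases: list = field(default_factory=list)

    def retrieve(self, problem: dict) -> Optional[Case]:
        """Retrieve: find the stored case whose problem shares the most features."""
        def overlap(case: Case) -> int:
            return len(set(case.problem.items()) & set(problem.items()))
        return max(self.cases, key=overlap, default=None)

    def retain(self, case: Case) -> None:
        """Retain: store a new experience, making it available for future reuse."""
        self.cases.append(case)

def solve(library: CaseLibrary, problem: dict) -> str:
    """Reuse the most similar prior case's solution; a fuller system would
    also revise the solution after feedback and retain the revised case."""
    prior = library.retrieve(problem)
    return prior.solution if prior else "no prior case; explore from scratch"
```

In LBD terms, each design iteration can be read as one pass through this cycle: students retrieve and reuse lessons from earlier trials, revise after testing, and retain what they learn for the next challenge.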
Learning by Design™ has been piloted and field-tested with over 3000 students and two dozen teachers in the Atlanta area, covering a full spectrum of student backgrounds and capabilities. We have developed a year's worth of LBD units spread over physical and earth sciences. Our physical science units, which cover forces and motion and mechanical advantage, focus on fair testing in the context of designing and running experiments and designing working artifacts. Our earth science units, covering a wide variety of topics in geology, focus on investigation through modeling and interpreting real-world cases.
In this paper, we focus specifically on how this curriculum helps students develop science skills. We first schematize the activity sequence in LBD, and then explain it in the context of a concrete curriculum challenge (the Balloon Car Challenge in the Vehicles in Motion unit), showing how iteration towards better design solutions enhances students' acquisition of scientific skills. Our assessments reveal that LBD students' science and collaboration skills are quite a bit more advanced than those of their comparisons (Kolodner et al., 2002, 2003a, 2003b). Given the space constraints of this paper, we will not explain Case-Based Reasoning, our base learning model, or the computer software for scaffolding relevant activities; fuller discussion of these can be found in Kolodner et al. (2003a).
2. The Design Cycle
Figure 1 shows how activities in LBD are sequenced. Notice that students engage in design challenges (left cycle in Figure 1) to motivate the need to investigate, and that investigation (right cycle in Figure 1) is engaged in when a need arises as they are designing. The solid arrows in the counter-clockwise direction show that sequencing is normally in that direction; the white arrows in the other direction show that there is sometimes cycling backwards, and between several steps, before going on.
Figure 1: LBD's Cycles
We have identified seven novel features in LBD (see the complete list in Kolodner et al., 2002, 2003a). For the purposes of this paper, we focus on two of them: iteration towards better solutions, and the public sharing of experimental results, design ideas, and design experiences.
3. A sample walkthrough
A sample walkthrough of the Balloon Car Challenge will illustrate LBD's features. Notice in the walkthrough how iteration towards better solutions provides opportunities for iteration towards better understanding, and how sharing experimental results, design ideas, and design experiences promotes focus on scientific reasoning.
3.1 Context of Challenge
The Balloon Car Challenge comes two months into the school year and is the second sub-challenge of our Vehicles in Motion unit, where students learn about forces and motion in the context of designing a vehicle and its propulsion system that can navigate several hills and beyond. The Vehicles in Motion unit has five mini-challenges: the Coaster Car Challenge, the Balloon Car Challenge, the Rubber-band and Falling-weight Challenges, and the Final Car & Antarctica Challenge. In the Balloon Car Challenge, students are challenged to design and build a propulsion system from balloons and straws that can propel their coaster car as far as possible on flat ground. For each challenge, the following sequence of activities iterates, providing students with ample chances to learn science content and skills.
3.2 Understanding the Challenge; Messing About and Whiteboarding:
Addressing the challenge begins with gaining a better understanding of it (top of the Design/Redesign cycle), first through "messing about" in small groups and then "whiteboarding" as a class to identify ideas.
In the early part of the Vehicle Challenge, students have "messed about" with toy cars and noticed that some are better able than others to navigate hills and bumpy terrain, that some start more easily than others, and so on, generating a variety of questions about the effects of forces on motion, e.g., "What affects how far a vehicle will go?" and "What kind of engine will get a vehicle started easily and keep it going?" They have also engaged in the first part of the Vehicles Challenge, the Coaster Car Challenge, where they design and construct a coaster car that can go as far and straight as possible after being let go at the top of a ramp. During the Coaster Car Challenge, they focus on combining forces, on two particular forces (gravity and friction), and on the skills of explaining the behavior of a device scientifically and making decisions informed by evidence.
Messing about is exploratory activity with materials or devices whose mechanisms are similar to what will be designed, done to quickly identify important issues that will need to be addressed. Here students mess about with a variety of balloon-and-straw engines. Some have larger balloons, some smaller; some have longer straws, some shorter; some have wider straws, some narrower. Having experienced messing about several times in prior challenges and become adept at it, the students are expected to know that they should quickly examine how each material works in several different ways.
After 20 minutes of messing about, the class gathers together for whiteboarding (Barrows, 1985). Whiteboarding is a whole-class discussion where students record their observations and identify facts they know, suggest ideas and hypotheses for addressing the challenge, and identify issues they need to learn more about. Having participated in whiteboarding several times earlier in the year, students eagerly volunteer what they've observed (e.g., "We attached two straws to our balloon, and it went really far."), argue about what they saw and how to interpret it (e.g., "Our car seemed to go farther with a shorter straw." "I don't think we can compare across those cars because they didn't go exactly the same distance off the ramp. We'll need to run fair tests to really know."), try to explain what they observed (e.g., "I think the wider straw makes it go farther because more air comes out of it, and that must mean more force."), and identify variables whose effects they want to know about conclusively (e.g., effects of length of straw, number of straws in a balloon, diameter of straw, extra engines, bigger balloons, amount of air in the balloon). During this public session, the teacher helps the class see what their peers have done, helps them articulate questions and hypotheses, and helps them turn their initial questions into questions that can be answered through well-controlled experiments (e.g., what effect does the size of a balloon have on the distance the car will travel?). Following curiosity-promoting activities such as "messing about" with "whiteboarding" provides a public venue for working through the asking of good questions.
3.3 Investigate & Explore; Poster Sessions; Rules of Thumb
Next the teacher helps the class decide which of the issues are most important to investigate. They might then have a discussion about good experimentation, reminding themselves of what they've learned about experimentation during earlier activities. Indeed, there may be a poster on the wall with "fair test rules of thumb" generated during previous activities, with such entries as "To ensure a fair test, make sure to run procedures exactly the same way each time," and "To ensure a fair comparison, keep everything the same except for one variable."
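These rules of thumb amount to a recipe for a controlled experiment: vary one variable, hold the rest fixed, and repeat trials so run-to-run variation averages out. The toy simulation below is our own illustration of that logic, under invented assumptions (the distance model, parameter names, and numbers are made up; no such code appears in the curriculum):

```python
import random

def run_trial(straw_width_mm: float, breaths: int = 3, surface: str = "tile") -> float:
    """One simulated balloon-car run. The linear distance model and the
    noise level are invented for illustration; they are not real physics."""
    base = 2.0 * straw_width_mm            # pretend wider straws give more thrust
    noise = random.gauss(0, 0.5)           # uncontrolled run-to-run variation
    return max(base + noise, 0.0)

def fair_test(widths, trials: int = 5) -> dict:
    """Vary ONE variable (straw width) while holding breaths and surface at
    their defaults, and average over repeated trials, per the rules of thumb."""
    return {w: sum(run_trial(w) for _ in range(trials)) / trials for w in widths}

print(fair_test([4.0, 6.0, 8.0]))  # mean distance per straw width
```

Averaging over trials is what lets the effect of the one varied variable show through the noise, which is exactly what the class's rules of thumb are after.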
Each group of students now takes responsibility for investigating one of the questions and then designs and runs an experiment to find an answer. It is usually the end of a class period at this point, and for homework, the teacher assigns individuals the responsibility of designing an experiment that will assess the effects of the variable they are investigating. As they are designing their experiments, students use a "Design Diary" page that prompts them on what to pay attention to in designing and running an experiment and collecting data (Puntambekar & Kolodner, 1998) (Figure 3).
Figure 3: Design Diary Page: My Experiment
Students get together in small groups the next day, comparing and contrasting the experiments they have each designed as individuals and designing an experiment that takes the best of each. One student may have remembered that multiple trials are needed while another grappled with which variables to control. It is rare for a single student to consider everything needed for a fair test. The teacher also makes her way around the room, providing help as groups need it.
Students spend a day or two designing and running their experiments and collecting and analyzing their data, and at the beginning of the following day, each group prepares a poster to present to the class. They show their experimental design, data, and data interpretations, and they try to extract advice for the class from their results. Each group presents to the class in a "poster session." Because students need each other's investigative results to be successful balloon car designers, they listen intently and query each other about experimental design and procedures and the gathering and interpretation of data (much as in a professional poster session). This provides an opportunity to discuss the ins and outs of designing and running experiments well. For example, in running balloon car experiments, groups often fail to make sure that they blew up their balloons exactly the same amount each time. The class might discuss ways of making sure that balloons are inflated the same way each time: by counting breaths, by measuring the diameter of the balloon, and so on. Often, even though students have already had some previous experience designing and running experiments, some groups' results are not trustworthy yet, and they redo their experiments and the cycle of activities just described. Figures 4 and 5 show typical posters.
Figure 4: A Balloon Engine Investigation - Length of Straw
Figure 5: A Balloon Engine Experiment - Number of Straws
When the class agrees that the results most groups have come up with are believable, the teacher helps students abstract over the full set of experiments and experimental results to extract "design rules of thumb," e.g., "By using double-walled balloon engines, the car goes farther because a larger force is acting on the car." Experiments provide learners the opportunity to experience and record phenomena, but learners don't necessarily understand why those phenomena are happening. To teach the explanations behind these phenomena, the teacher makes relevant reading available about the science content involved, performs demonstrations that exemplify the science concept in another context, and reviews and discusses student-generated examples of the science concept, usually from everyday life experiences. Here the issue is why a bigger force would cause the vehicle to go farther if the total amount of air coming out of the balloon is the same no matter how many balloons are used. The balloon car works best when a high velocity is achieved, because most of the distance gained is during coasting (after the balloon engine has exhausted its air). If you are coasting to zero velocity, then the higher your velocity is when you start to coast, the greater the distance you will travel, provided that the floor has low and near-constant friction during the coasting. Thus, students need to achieve high acceleration with their engines. To understand this, the class spends time discussing both Newton's Third Law - equal and opposite forces - and Newton's Second Law - how changes in force and mass can change motion - and uses that science to explain how their balloon-powered vehicles behave. Later iterations of the rule of thumb produce more informed and complete statements, e.g., "By using double-walled balloon engines, the car goes farther because a larger force is acting on the air inside, so then an equally large force from the air acts on the car." (This is an example of how the Third Law gets woven into the context of the Balloon Car.)
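To make the coasting argument concrete, here is a short worked version of the reasoning in our own notation (the symbols F, m, v, and μ are our labels, not the curriculum's). During the engine phase, Newton's Second Law gives a = F/m, so a larger balloon thrust F acting on a car of mass m produces a larger acceleration and hence a larger speed v at the moment the balloon empties. During coasting, friction decelerates the car at roughly a_f = μg, and from v_f^2 = v^2 - 2 a_f d with final speed v_f = 0, the coasting distance is d = v^2 / (2μg). Because d grows with the square of v, doubling the launch speed roughly quadruples the coasting distance, which is why a larger engine force matters so much.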
3.4 Design Planning, Pin-Up Session:
Upon returning to the design/redesign cycle, the class briefly revisits the whiteboard and specifies the constraints and criteria of this challenge (understanding the challenge). Next, students plan their balloon car designs, using the combination of experimental results the class has produced, the design rules of thumb extracted, and the scientific principles read about and discussed as a class. The teacher makes her way around the room, making sure that the design decisions groups are making can be justified by the experimental results they've seen and what they know about forces, and are not based simply on the loudness or popularity of a group member. (In general, students tend to design cars with two or three or even four balloon engines, often with doubled balloons, and they tend to use wide straws or several narrow ones attached to each balloon.)
Each group prepares another poster, this time presenting their design ideas along with the evidence that justifies each decision and their predictions about how it will perform. They present to their peers in a "pin-up session." During this activity, justifying decisions using evidence and making predictions are the primary foci, and after groups present to their peers and entertain their peers' questions and suggestions, the class as a group discusses not only the ideas that everyone has presented, but also the practice of justifying, what it means to identify and use good evidence, and the making of predictions. (They've had much experience with this while working in small groups.)
Justifications during this pin-up session tend to refer to both the experiments that have been done and the principles about combining forces that were discussed earlier (e.g., "We decided to use two balloon engines because Group 3's experimental results showed that the more balloon engines, the more force will be exerted, and the farther the car will go. We didn't use more than two because we couldn't figure out how to blow up more than two balloons at a time. We also decided to double the balloons on each engine because Group 4's experiment showed that double balloons make the car go farther. We think this is because a double balloon exerts more force on the air inside the balloon, providing more force in the direction we want the car to go.") If justifications truly are qualitatively better than during a previous pin-up session, the teacher might point that out to the class, ask them if they know why they were better, and use the results of that discussion to update "justification rules of thumb" or the "justification working definition," e.g., "Justifications can refer to experimental results or to scientific principles; if they refer to both, they will convince people more than if they refer to just one."
3.5 Construct & Test; Analyze & Explain; First Gallery Walks:
Students now move to the construction and testing phase, modifying their designs based on what they've discussed in class and heard from their peers, and then constructing and testing their first balloon-powered engines. They use another design diary page here, this time with prompts helping them keep track of their predictions, the data they are collecting as they test, whether their predictions are met, and explanations of why not (Figure 6).
Figure 6: Design Diary Page: Testing My Design
None of their balloon cars work exactly as predicted, sometimes because of construction problems and sometimes because of incomplete understanding of scientific principles and the results of experiments. After working in small groups to try to explain their results, the class engages in a "gallery walk," with each group's presentation focused on what happened when their design was constructed and tested, why it worked the way it did, and what to do next so that it will perform better. The teacher helps students state their explanations scientifically and calls on others in the class to help as well. Hearing the explanations students are generating allows the teacher to identify misconceptions and gaps in student knowledge, and discussions about those misconceptions and gaps might follow the gallery walk, along with readings or a short lecture or set of demos to help promote better understanding. This gallery walk is also followed by classroom discussion abstracting over the set of experiences presented and revisiting and revising design rules of thumb, correcting them and doing a better job of connecting to them explanations of why the rules of thumb work. Discussion also focuses on the explanations students made, and the class might generate the beginning of a working definition of "scientific explanation," e.g., "Explaining scientifically means providing reasons why something happened that use science terminology and include the science principles we learned about earlier."
3.6 Iterative Redesign:
Following the gallery walk and ensuing discussions, students make their way again around the Design/Redesign cycle, revising their designs based on the explanations their peers have helped them develop and the new things they've learned. They construct and test their new designs, each time using another "Testing My Design" Design Diary page and each time participating in a gallery walk with the rest of the class. They iterate in this way towards better and better solutions until they or the teacher decide they have gotten as far as they need to. Discussions after second and third gallery walks often focus on "fair testing," this time not in the context of an experiment but rather in the context of comparing results across different designs. Students will not be able to explain why a new design works better than an old one if they have changed more than one thing in their design since the earlier time. Nor will they be able to believe their own results if they don't follow the same testing procedures each time. Sometimes teams discover these issues as they are testing a later design and report them to the class; sometimes a peer notices that a test was done differently than last time or, if results are confusing, asks how a test was done and helps a group discover inconsistencies in their procedures. The "fair test rules of thumb" may be consulted and refined, as might the working definition of "scientific explanation."
3.7 Finishing up:
The entire activity takes ten to twelve class periods. At the end, the class holds a final gallery walk and a contest, and then they compare and contrast across designs to better understand the scientific principles they are learning, going back to the rules of thumb to revise and explain them better. They finish up, as well, by discussing their collaboration experience, their design process, their use of evidence, and so on, revising the rules of thumb and working definitions about each. Following all of this group work, each student writes up and hands in a project report including a summary of the reasoning behind their group's final design and what they've learned about collaboration, design, use of evidence, and so on.
4. Essence of Learning by Design™
Table 1 shows how the rituals of LBD iterate across the different phases of the learning activities. As you can see, students are given ample opportunities to express their ideas and discuss them with one another; building on these discussions, both the teacher and the students work to extract usable rules of thumb, abstracting and summarizing the science content as well as science skills such as fair testing, consistent procedure, and justification with evidence.
Table 1: Rituals in the different phases of LBD activities

| Phase                   | Pin-up & Poster | Whiteboarding | Gallery walk | Reflecting on |
|-------------------------|-----------------|---------------|--------------|---------------|
| Understanding challenge | **              |               |              |               |
| Investigate & explore   | *               | *             |              |               |
| Design planning         | *               | *             | *            |               |
| Construct & test        | *               | *             | *            |               |
| Iterative design        | *               | *             |              |               |
| Finishing up            | *               | *             |              |               |
If you go back to the details of the activity sequence in Section 3, you will find several supports that scaffold this learning.
Ritualized classroom activities: The activity sequence includes a variety of "ritualized" classroom activities that are linked to science practices. Rituals provide a systematic way of carrying out an important skill set, systematizing practices to make them methodical (promoting good habits) and engaging students in public practice as collaborators (affording noticing, asking, discussion, and productive reflection). Rituals secure students' iteration and collaboration.
Orchestration: In both the Design and Investigate cycles, LBD divides up responsibility for investigations among the different groups in a class (as in the jigsaw approach). Because all groups need the results of every other group's investigations, students listen to the presentations of others with great attention; they want to apply what's been learned by other groups. They realize that they can only make use of what other groups have found through investigations if those investigations are carried out in a believable way, making it natural to discuss ways of carrying out investigative practices.
Teacher's coaching: The teacher has a variety of roles to play in the LBD classroom. She manages timing, grasping opportunities to introduce scientific vocabulary and to formulate rules of thumb, helping students connect their project activities to targeted science content and helping them reflect appropriately on their experiences. As seen in the shift from each public presentation to a whole-class discussion, the teacher often guides students' reflection. For example, when students ask each other how many times they repeated a procedure, the teacher can introduce the word "trial" and facilitate a discussion about how one knows if they've done enough trials. When a student asks a group why results across trials are so different from each other, it becomes natural to talk about why results of trials might differ, to put forth a rule of thumb about fair testing that says, "When results of trials are very different from each other, it might mean some important variable wasn't controlled for," and to introduce the notion of "replicability."
Design diary: Design diary pages are used by individuals and small groups while they are engaging in important design and investigation activities. In addition to prompting the design of experiments and collection of data, as shown in the design diary pages above, students use design diary pages to help them restate the challenge they are working on, chart the justifications for their design decisions, and note interesting aspects of their peers' experiences. These simple paper-and-pencil scaffolding tools provide reminders for individuals and small groups about what they should be doing and where to put their attention during any activity, including work on rules of thumb, allowing most groups to work relatively independently. Most groups still need some help, and the teacher makes her way around the classroom providing it. But managing group work is made far easier for the teacher: if design diary pages can remind most groups what to focus on, then the teacher can provide individualized attention to those groups that need it, and all groups have what they need for success.
Whiteboarding: The whiteboard, with its columns for "facts and observations," "ideas and hypotheses," and "learning issues," provides a public external record of the class's progress as they make their way through a challenge. Revisiting it and adding new ideas, facts, and learning issues while crossing out learning issues that have been covered and ideas that have been found to be poor ones provides an accounting of how far they've come. Revisiting the whiteboard to organize the learning issues into groups provides a way of dividing a challenge into parts and having the class be able to keep track of why they are doing some piece of it. Some students have become so fond of the way the whiteboard helps them keep track of their ideas and learning needs that they've taken the initiative to use whiteboards to organize their thoughts on science fair projects (Kolodner et al., 2002).
5. Results
We've evaluated learning among LBD students using two methods. (1) We've tested students on physical science content knowledge and general science skill knowledge using a standardized written test format, administered in a pre-/post-implementation design in LBD™ classrooms and comparison classrooms. Items on the test were drawn from a number of normed and tested sources, including the National Assessment of Educational Progress (NAEP), released items from the Third International Mathematics and Science Study (TIMSS), the Force Concept Inventory, and the Force and Motion Conceptual Evaluation, as well as items of our own. We designed the test items so that, by cross-referencing subsets of the questions, we can pinpoint the conceptual model used by the students and see how it has evolved. (2) We've also been testing students in both LBD and comparison classrooms for changes in content knowledge, science skills, and the ability of students to solve problems in group settings using two performance-based assessment tasks, both adapted from the PALS database (SRI, 1999) with the addition of an initial design activity on each assessment. These assessments were extensively videotaped. We have developed a tool for analyzing the tapes for both collaboration issues and use of appropriate scientific knowledge to solve the challenge (Gray, Camp, and Holbrook, 2001).
Our results are quite interesting. Comparing LBD students to matched comparisons, we've found in two separate years of data collection that LBD students do at least as well as comparison students in content mastery, and often better (Kolodner et al., 2002, 2003a, 2003b). When we analyze the results from the various individual teachers, we find that the largest gains among our LBD students were in those classes that are the most socio-economically disadvantaged and that tested lowest on the pre-test. Interestingly, these classes also had teachers with less background in physical science. We also see a trend towards re-engaging girls in science. Preliminary scoring on our 1998-99 data shows that while girls score lower than boys, as a rule, on pre-tests, they are equal with boys or ahead of them on post-tests. While we would have liked to see LBD students master the content far better than non-LBD students, and we believe they have, we don't yet have the data to show it, as the kinds of questions asked on short-answer and multiple-choice tests often don't provide good ways of distinguishing different levels of content mastery.
We do have quite interesting data, however, on the learning of skills, a major focus in LBD. Comparing LBD students to matched comparisons on performance tasks, our data show that LBD students consistently perform significantly better than non-LBD students at collaboration skills and meta-cognitive skills (e.g., those involved in checking work) and that they almost always perform significantly better than matched comparisons on science skills (those involved in designing fair tests, justifying with evidence, and explaining). Non-LBD students treat the tasks we give them as simply writing something down, never really addressing them as a challenge needing distributed effort. LBD students, on the other hand, negotiate a solution and see the tasks as requiring an experimental design. Indeed, in the years when we've collected performance data early in the year (after an introductory launcher unit for LBD students) and later in the year (in December or January, after the Vehicles unit for LBD students, and at the end of the school year for comparisons), we saw not only that LBD students performed better than comparison students, but that while early in the year LBD students performed better than comparisons in just a few categories (generally collaboration and self-checks), later in the year they performed better in a much larger range (sometimes the whole set). Most interesting, perhaps, is that when we compare mixed-achievement LBD students and honors non-LBD students, we find that mixed-achievement LBD students were indistinguishable from non-LBD honors students on skills, meaning that LBD brings normal-achieving students to a level of capability usually found only among gifted or honors students. Table 2 shows the data that support these results.
Table 2: Results of Performance Assessments for 1999-2000 and 2000-2001: Means and Standard Deviations for Comparison and Learning by Design Students after the unit
| Coding Categories | 1999-2000 Typical Comparison | 1999-2000 Typical LBD | 2000-2001 Typical Comparison | 2000-2001 Typical LBD | 1999-2000 Honors Comparison | 1999-2000 Honors LBD | 2000-2001 Honors LBD |
|---|---|---|---|---|---|---|---|
| Self-checks | 1.50 (.58) | 3.00 (.82)** t(6) = 3.00 | 1.30 (.67) | 3.88 (1.03)* t(7) = 5.548 | 2.33 (.58) | 4.25 (.50)*** t(5) = 4.715 | 5.00 (.00)*** t(3) = 6.197 |
| Science Practice | 2.25 (.50) | 2.75 (.96) | 1.40 (.89) | 3.75 (1.32)* t(7) = 3.188 | 2.67 (.71) | 4.75 (.50)*** t(4) = 4.648 | 4.75 (.35)** t(3) = 4.443 |
| Distributed Efforts | 2.25 (.50) | 3.25 (.50)* t(6) = 2.828 | 1.70 (.84) | 3.00 (.00)* t(7) = 3.064 | 3.00 (1.00) | 4.00 (1.15) | 4.25 (.35) |
| Negotiations | 1.50 (.58) | 2.50 (1.00) | 1.40 (.65) | 2.88 (1.03)* t(7) = 2.631 | 2.67 (.58) | 4.50 (.58)*** t(5) = 4.158 | 4.00 (.00)* t(3) = 3.098 |
| Prior Knowledge adequate | 1.50 (.58) | 2.75 (.96) | 1.60 (.89) | 3.88 (.75)* t(7) = 4.059 | 2.67 (1.15) | 3.50 (1.00) | 4.25 (.35) |
| Prior Knowledge | 1.75 (.50) | 2.25 (.50) | 1.60 (.89) | 3.75 (.87)* t(7) = 3.632 | 3.0 (.00) | 3.75 (1.50) | 3.75 (.35) |
| Science Terms | 1.75 (.50) | 2.75 (.96) | 1.50 (.87) | 2.88 (.63)* t(7) = 2.650 | 2.67 (.71) | 3.50 (1.00) | 4.00 (.00) |
* = p < .03; ** = p < .02; *** = p < .01
N = number of groups; most groups consisted of 4 students each.
(Means are based on a Likert scale of 1-5, with 5 being the highest rating.)
Reliability for the coding scheme ranged from 82-100 percent agreement when two coders independently rated the tapes. For this set of data, a random sample of four to five tapes was coded for each teacher from one class period. Approximately 60 group sessions are represented in this table, representing 240 students.
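The significance tests in Table 2 are independent-samples t-tests contrasting group-level scores. As a minimal sketch of that computation (the scores below are hypothetical, chosen only to be consistent with the reported 1999-2000 Self-checks means of 1.50 and 3.00; the paper reports only means, SDs, and t statistics, not raw scores):

```python
from scipy import stats

# Hypothetical group-level Likert scores (1-5) for one coding category.
comparison = [1.0, 2.0, 1.5, 1.5]   # typical comparison groups (mean 1.50)
lbd = [3.0, 2.0, 3.5, 3.5]          # typical LBD groups (mean 3.00)

# Independent-samples t-test, the kind of contrast reported in Table 2
t, p = stats.ttest_ind(lbd, comparison)
print(f"t({len(lbd) + len(comparison) - 2}) = {t:.3f}, p = {p:.3f}")
```

With four groups per condition, the degrees of freedom are 4 + 4 - 2 = 6, matching the t(6) entries in the table's first column of contrasts.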
6. Discussion

LBD provides motivating activities that keep students' attention. It helps them reflect on their experiences in ways that promote abstraction from experience, explanation of results, and understanding of conditions of applicability. Repeated use of concepts, repeated practice of skills, and experience with those skills and concepts over a variety of circumstances seem to be important. LBD gives students reason to reflect on their experiences (they have to explain to others; others need their results; they want to improve their own results). And LBD's sequencing provides many different ways of helping students be successful at reflecting on their experiences productively, remembering, judging applicability, applying, and explaining. We look forward to exploring in more detail the trajectories of learning of individual students in LBD classrooms and discovering the best ways of carrying out LBD's practices, in order to achieve a variety of goals: gaining a better understanding of learning in real-world situations orchestrated for learning for transfer, refining LBD units and ritualized activity structures, and providing guidelines for promoting deep and lasting learning in project-based inquiry classes that span disciplines and age groups.
7. Acknowledgements
The "we" in this paper includes many wise people who have helped create, evaluate, and extract lessons from Learning by Design™, including Jackie Gray, Jennifer Holbrook, David Crismond, Paul Camp, Barbara Fasse, Mike Ryan, Lisa Prince, and the many teachers weve worked with. Our funding for this work came from the National Science Foundation, the Woodruff Foundation, and Georgia Techs College of Computing. Thanks to Naomi Miyake for making sure this paper got written.
8. References
Barrows, H. S. (1985). How to design a problem-based curriculum for the preclinical years. NY: Springer.
Blumenfeld, P.C., Soloway, E., Marx, R.W., Krajcik, J.S., Guzdial, M., & Palincsar, A. (1991). Motivating project-based learning: Sustaining the doing, supporting the learning. Educational Psychologist, 26(3 & 4), pp. 369-398.
Bransford, J. D., Brown, A.L. & Cocking, R. R. (Eds.) (1999). Learning and Transfer (Chap. 2). In How People Learn: Brain, Mind, Experience, and School. Washington, DC: National Academy Press.
Gray, J., Camp, P.J., & Holbrook, J. (2001). Science talk as a way to assess student transfer and learning: Implications for formative assessment. In preparation.
Hmelo, C.E., Holton, D.L., Kolodner, J.L. (2000). Designing to Learn about Complex Systems. Journal of the Learning Sciences, Vol. 9, No. 3.
Kolodner, J.L. (1993). Case-Based Reasoning. San Mateo, CA: Morgan Kaufmann.
Kolodner, J.L. (1997). Educational Implications of Analogy: A View from Case-Based Reasoning. American Psychologist, Vol. 52, No. 1, pp. 57-66.
Kolodner, J.L., Crismond, D., Gray, J., Holbrook, J., & Puntambekar, S. (1998). Learning by Design from Theory to Practice. Proceedings of ICLS 98. Charlottesville, VA: AACE, pp. 16-22.
Kolodner, J.L., Camp, P.J., Crismond, D., Fasse, B., Gray, J., Holbrook, J., & Ryan, M. (2003a, in press). Promoting Deep Science Learning Through Case-Based Reasoning: Rituals and Practices in Learning by Design Classrooms. In Seel, N.M. (Ed.), Instructional Design: International Perspectives, Lawrence Erlbaum Associates: Mahwah, NJ.
Kolodner, J.L., Crismond, D., Fasse, B.B., Gray, J.T., Holbrook, J., Ryan, M., & Puntambekar, S. (2003b, in press). Problem-Based Learning Meets Case-Based Reasoning in the Middle-School Science Classroom: Putting a Learning-by-Design Curriculum into Practice. Journal of the Learning Sciences, Vol. 12.
Kolodner, J.L., Gray, J.T., & Fasse, B.B. (2002, in press). Promoting Transfer through Case-Based Reasoning: Rituals and Practices in Learning by Design™ Classrooms. Cognitive Science Quarterly, Vol. 1.
Puntambekar, S., & Kolodner, J.L. (1998). The Design Diary: A Tool to Support Students in Learning Science by Design. Proceedings of the International Conference of the Learning Sciences (ICLS 98). Charlottesville, VA: AACE, pp. 35-41.
Schank, R.C. (1982). Dynamic Memory. New York: Cambridge University Press.
Schank, R.C. (1999). Dynamic Memory Revisited. New York: Cambridge University Press.
SRI International, Center for Technology in Learning. (1999). Performance Assessment Links in Science Website. http://www.ctl.sri.com/pals.