Intersecting Education
Cross-sector ideas for education reform and innovation

Monday, August 15, 2011

An interesting twist on the Khan Academy approach: why not have students create their own videos teaching core content? Check out Lincoln MS in Santa Monica, where kids are creating their own math teaching videos.

As teacher Eric Marcos puts it, "the best way to learn something is to teach it. I’ve heard kids say that when they were trying to explain how to divide fractions, they knew to flip the number over but they didn’t know why." Because they were creating a tutorial video, "they found out that they didn’t know why" – and then, naturally, they found out why.

Awesome.

Sunday, August 7, 2011
Customized schooling-- only something for the super rich and the very poor?
I came across this NYT article a month ago and have been meaning to post about it since then. The community of Millburn, NJ is in a twist right now over the issue of charter schools due to the opening of a new Mandarin-immersion school that will draw students from a number of surrounding districts. There's vocal opposition, the sentiment of which is summed up by this quote from Matthew Stewart, a resident who thinks charter schools should only be allowed to operate in underperforming systems:
“Public education is basically a social contract — we all pool our money, so I don’t think I should be able to custom-design it to my needs,” he said, noting that he pays $15,000** a year in property taxes. “With these charter schools, people are trying to say, ‘I want a custom-tailored education for my children, and I want you, as my neighbor, to pay for it.’ ”
The assumption I struggle with in this statement is that entering into a social contract means being willing to ignore public inefficiency and problems. The point here is that the district wasn't meeting parent needs, and wasn't flexible enough to change on its own. Even if not underperforming on the most normative measures, the district was underperforming as a public service in other ways. So the public utilized a PUBLIC OPTION (charter creation) to provide an alternative for a group of kids and parents who needed it.
What's so wrong with customized education? And why should we only offer it to kids in blatantly failing systems or to those who can afford to opt out of the public option entirely?
**Certain folks like to throw around tax figures to show how much they're paying for other people's kids... $15,000 goes to support many municipal services, of which schooling is one. I wonder how many kids Mr. Stewart sends to the schools?
Thursday, July 28, 2011
Linking it together: Blog piece on Policy by Algorithm
Interesting blog post by Jeff Henig on the Straight Up blog today about how algorithms play into decision-making in education (with comparisons to Google and healthcare).
"A signature element of many examples of contemporary policy by algorithm, moreover, is their relative indifference to the specific processes that link interventions to outcomes; there is much we do not know about how and how much individual teachers contribute to their students' long-term development, but legislators convince themselves that ignorance does not matter."
The one thing I do find a little odd about this post is that Henig aptly makes the case for the problem and chides legislators for falling into the trap of clean solutions, but then doesn't really give us a "then what." It's pretty easy to lay out the dilemma here, and then to spout a few words about the need for teamwork and professionalism. But how do we actually do it?
Thursday, July 14, 2011
SERI scores are out. Not surprisingly, the gap between different states is huge...
I posted yesterday about my takeaways regarding the gender of the three Google global science fair winners, but didn't note the surprising fact that the three winners were all from the US. Sure, there are a lot of reasons other than sheer brilliance or rigor of the entries for this being the case (*very* small sample size, language barriers, cultural differences in style, resources for dazzling the judges, etc.), but it's worth noting that this news comes following the release of the national SERI (Science and Engineering Readiness Index) results, which measure how high school students are performing in physics and calculus.
The news isn’t spectacular. There’s huge variability amongst states. As the press release noted “Massachusetts easily bested all other states with a score of 4.82, while Mississippi came in at 1.11. Twenty-one states in total, including California, earned below or far below average scores, while only 10 states earned scores above the national average.” (Essentially, the relatively higher performance of a few states drags up the average to the point that most states don’t even meet it. The distribution curve is positively skewed.)
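A toy illustration of that skew (only the Massachusetts 4.82 and Mississippi 1.11 endpoints come from the release; the scores in between are invented for the sketch): when a few high scorers sit far above the pack, the mean gets pulled above the median, so most states land below "average."

```python
# Hypothetical state scores -- only the 4.82 and 1.11 endpoints come from
# the SERI release; the rest are made up to illustrate positive skew.
scores = [1.11, 1.5, 1.8, 2.0, 2.1, 2.3, 2.4, 4.5, 4.82]

mean = sum(scores) / len(scores)
median = sorted(scores)[len(scores) // 2]  # middle value of an odd-length list
below_mean = sum(1 for s in scores if s < mean)

# A couple of high outliers pull the mean above the median, so most
# "states" in this toy list score below the mean.
print(f"mean={mean:.2f}, median={median}, below mean: {below_mean} of {len(scores)}")
```

Same story in the real data: with 21 states below or far below average and only 10 above, the handful at the top is doing the pulling.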
In a time when many are wringing their hands about our country’s future global competitiveness, it’s clear we’ve got a lot of work to do. SERI sets a pretty high bar given where we are as a nation; it’s focused on the so-called “hard” and physical sciences rather than biological or health-related ones, and is compiled based on Advanced Placement scores, NAEP (National Assessment of Educational Progress) reports, enrollment data, and teacher certification/qualification requirements. Yet, the picture painted with SERI isn’t as bad as the one we’d see if we did an international comparison against tests like the PISA (where the US ranks 30th in “Maths” and 23rd in the sciences), or the TIMSS (11th).
The bright point on all of this is that we've got some statewide comparisons that account for more than just test scores. Hopefully we can use these results to push the national conversation towards a higher bar for everyone.
Wednesday, July 13, 2011
Google science fair: Girl power? Or the typical science-gender divide replicated?
A lot of news outlets are making a pretty big deal about the fact that the three winners of Google’s first-ever global science fair are girls. As the LA Times reported, Google touted "girl power" in its own press release, and Fast Company made a point of noting that, “the trio of girl champions narrowly beat out boys of equal mental prowess.” Tori Bosch, of the XX Factor on Slate gushed, "I can’t help feeling a little sisterly glee at the fact that the winners were all girls...They earned their sweet Lego trophies with their thoughtful approach to science, but their gender is getting them more attention today. Someday, perhaps three girls rocking a science fair won’t be news, but for now, it is."
While women have been on par with, if not outpacing, men in many academic areas (getting college and graduate degrees, for one), we’ve yet to come even close to matching the aggregate numbers in math, engineering, and science. This under-representation of women is a well-covered, and now well-funded, issue, so it’s not all that surprising that the trio is getting featured in the popular media.
The fact that three girls took home top awards is great; hopefully these three winners will set the stage for more young girls to see science as a viable, interesting, and worthwhile pursuit. Regardless of gender, the projects the three brilliant young women presented and defended are incredibly impressive. However, I do wish we'd take a more nuanced look at the results of the Google competition as it relates to the gender gap.
If we pause for a moment and examine the Google results more closely, it's hard to see this news as heralding a major moment for women in science. According to the NSF, as of 2008, women already held a majority of degrees in the medical and biological sciences; women are similarly well-represented in occupations like dietitian, pharmacist, and biologist. It is in these areas that all of the female winners focused. When we talk about fields where women are under-represented, we’re generally most concerned with engineering and the physical sciences, where women account for only about a quarter of degrees and are much less represented in the workforce. For example, only 10 percent of engineers were women. (You can do your own analysis on the NSF’s wicked cool data site and digest.)
Looking at the Google finalists, the pattern is evident. Of the names I could identify as belonging to one gender or the other (12 of the 15 -- admittedly, not a scientific analysis here…), five of the six girls submitted projects relating to health, biology, or psychology (the sixth investigated sags in power lines… super cool, no?). All six of the boys I could identify went for computer science, engineering, or math. In other words, the traditional gender divide was as present as ever. The Google results definitely can't be seen as leading indicators of a change in field composition at large.
So should we be cheering? Dubious or vaguely concerned? I’m not sure. But it does seem that we’ve got to become more field-specific here if we want to change the demography of the science fields or laud attempts to do so. My three takeaways are:
- Yeah, Google! Corporations can do pretty awesome things.
- It’s great to see these three girls kicking some butt and taking some names, let’s hope they stick with it.
- I will avoid soy-based marinades when grilling.
Sunday, July 10, 2011
It's not the technology, it's the user. (Duh.)
Edweek published an article on the use of interactive whiteboards in the classroom, and the big takeaway was that the skill of the teacher matters more than the quality of the technology used. Um... yeah. Big surprise, guys.
It always amazes me that people expect technology to play a role in the classroom that it doesn't play in other areas of life. Think about it: when you got email, did the program make you a better communicator? Probably not. Did you naturally understand how to manage it, or did it take work? You get out of a given technology what you learn to put into it.
But in education, we seem to think that investments in hardware and software will automatically lead to better learning. We've got to find ways to not only create great technologies, but also to teach professionals how to integrate those great technologies into their practice. Further, we've got to start making the effective, innovative use of technology a standard for excellent teaching.
As Patrick Ledesma of Fairfax County notes in the article, “an IWB is just a tool, and if it’s not used correctly, you can’t blame the tool, you have to blame the user. [...] If you’re a teacher who used to lecture at a chalkboard, you’ll do the same with the IWB.”
Bringing back the student voice
Memphis recently approved a new teacher evaluation system that incorporates, among other factors like test scores and principal observations, student ratings of effectiveness. As we move towards systems that put so much weight on student performance, why shouldn't we factor in student feedback on their day-to-day classroom experience?
Emerging research suggests we should. A NYT article on the Gates Teaching Effectiveness study reported that "teachers whose students described them as skillful at maintaining classroom order, at focusing their instruction and at helping their charges learn from their mistakes are often the same teachers whose students learn the most in the course of a year, as measured by gains on standardized test scores." These findings were based on a survey instrument developed by Ron Ferguson, here at the Harvard GSE, and indicate that students know more about the quality of their educations than we often give them credit for. See the MET briefing for the full details.
Surely, students shouldn't be the only arbiters of value here. We should measure in-classroom practice and growth through observations, and we shouldn't throw out the tests. Student ratings won't always be fair, and work should be done to make sure the way we ask questions links up to what we need to know (questions have been raised about independent sites like ratemyteacher.com, where ratings are broad and superficial... see Ferguson's Tripod project here for what the current assessment looks like), but they should be a factor in helping us triangulate performance. If we all think back to our own experiences, we knew when we were being pandered to and we knew when we were being pushed and supported.
Memphis is actually one of the partner districts for the Gates project, so I guess it isn't too much of a surprise that they're trying some new approaches. Future iterations of the Memphis model may also find ways to bring in parent ratings too, which would be great.
I'm excited to see other districts try this. Let's bring the voice of the end-user into our assessments of success.
