Do you remember when you first learned to program? If you are like me, it came very easily. I remember sitting at home one weekend reading in fascination the ICL BASIC Pocket Edition, then rearranging my A-level physics practicals so that I could be there at the booked times and use the school's dial-up terminal to run my code on some distant mainframe. I learned Fortran from McCracken's A Guide to Fortran IV Programming, and APL from an APL\1130 Primer that I begged from IBM. (Though it was a couple of years before I actually came into contact with a machine where I could use that arcane knowledge.) In my first year at university, before I officially started studying computing, I learned BCPL, Z80 assembler and Algol68.
Now for you, the precise list of technologies is probably different, but I bet the outline is the same. It came very easily. You mostly taught yourself. It was fascinating, addictive even. Like building a machine out of Meccano parts, all perfect. Like giving instructions to a magic genie, or writing the rules for a game that played itself. Surely anyone could do this and have just as much fun? But consider: could we be making the "false consensus" error? Could we be wrong when we imagine that what we find easy is typical?
The assumption that it's going to be easy underpins a lot of current efforts. For example, if you haven't seen it already, it's worth watching the promotional video for the code.org website. This is a very dense video — in just over 5 minutes it makes just about every case for learning to program. It opens with a lofty quote from Steve Jobs: "Everybody in this country should learn how to program a computer ... because it teaches you how to think." (Which, if we take it seriously, gets into dangerously political territory. See Not a Tool but a Philosophy of Knowledge.) The video then wheels out a sequence of substantial computer geeks, including Bill Gates and Mark Zuckerberg. They enthuse about how great it is to be able to program; how wonderful and empowering it was when they first learned. The video wraps up with a starkly utilitarian message, aimed at those who are still not convinced: people who can program get better jobs. They get ping-pong tables and free food. At work! The lesson is clear. For all these reasons, everyone should learn to program. But "should" depends on "can". Can everybody learn how to program a computer?
The evidence is not very encouraging. For example, in their 2006 paper The camel has two humps Saeed Dehnadi and Richard Bornat noted that:
"Despite the enormous changes which have taken place since electronic computing was invented in the 1950s, some things remain stubbornly the same. In particular, most people can't learn to program: between 30% and 60% of every university computer science department's intake fail the first programming course. Experienced teachers are weary but never oblivious of this fact; bright-eyed beginners who believe that the old ones must have been doing it wrong learn the truth from bitter experience; and so it has been for almost two generations, ever since the subject began in the 1960s."
This paper caused considerable controversy at the time. Not, however, because of its thesis that most people found programming too hard. Few people disagreed with that assertion, because everyone who has tried to teach programming has been confronted, in their direct experience, with the same evidence. The controversy concerned whether a test which the authors proposed could really do what they claimed, and sort the programming sheep from the non-programming goats. (Such "programming aptitude tests" have a poor track record, and their results usually correlate very weakly with subsequent direct measures of programming ability. The results from Dehnadi and Bornat's test were mixed, and perhaps all we can say for sure at the moment is that, if the test subject has had any previous exposure to an attempt to teach programming, the test's predictive effect seems to disappear.)
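For context, the questions in Dehnadi's test were of roughly this form: given a few Java-style variable assignments (e.g. `int a = 10; int b = 20; a = b;`), choose the resulting values of `a` and `b`. The scoring looked not for correctness but for consistency: did the student apply the same mental model of assignment throughout? The sketch below is my own illustration, not the published instrument; it enumerates some candidate mental models and the answers each would predict.

```python
# Illustrative sketch only: candidate "mental models" a novice might hold
# for the statement `a = b`, in the spirit of Dehnadi and Bornat's test.
# Each model is a rule mapping the old values of (a, b) to new values.

def copy_right_to_left(a, b):      # the conventional semantics: b copied into a
    return b, b

def copy_left_to_right(a, b):      # direction reversed: a copied into b
    return a, a

def swap(a, b):                    # the two values are exchanged
    return b, a

def no_effect(a, b):               # assignment does nothing
    return a, b

MODELS = {
    "right-to-left copy": copy_right_to_left,
    "left-to-right copy": copy_left_to_right,
    "swap": swap,
    "no effect": no_effect,
}

def predictions(a, b):
    """Each model's answer for: int a = ...; int b = ...; a = b;"""
    return {name: rule(a, b) for name, rule in MODELS.items()}

if __name__ == "__main__":
    for name, (a_new, b_new) in predictions(10, 20).items():
        print(f"{name:>20}: a = {a_new}, b = {b_new}")
```

A student who gives the "swap" answer to every such question is consistent, even though wrong; it was this consistency, rather than correctness, that the test attempted to measure.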
However, there's a deeper issue here, which has so far hardly been noticed. Even if we had a pre-course test for programming ability, where would that leave us? Sure, it would permit university computing departments to filter their prospective students and to reduce a lot of unnecessary suffering by turning away those who would find programming too hard. But if we really believe everyone should learn to program, if that's more than just a slogan, then this approach doesn't help at all, does it?
The challenge here is to work out techniques for teaching programming to those people who do not find that it comes easily. A test that showed ahead of time who was a "natural" and who wasn't would be helpful, but not to filter out and discard those who have difficulty. It would be more akin to a diagnosis of dyslexia. We don't say to a child "You have dyslexia. You will never be able to read." Instead, when we find that a child has difficulties with reading we put extra effort into helping them, and to a large extent we now know how to be successful. With dedication, it's possible to get literacy rates in excess of 98%. (Although governments seldom consider it worth trying that hard, it is possible.) Personally, I believe that if we wanted, the same could be true of programming.
But how? Surely, as Richard Bornat said to me last autumn, "We have tried everything." What can we do that's different? Now, I'm no "bright-eyed beginner" — I've been teaching programming classes for several years — but I think there are things we could try, and don't, because of who we are. Mostly, the people who teach programming are first of all expert programmers, not expert programming teachers, and they mostly aim their teaching at students who are fundamentally the same as themselves. The teachers are almost always the people for whom it came easily. The people who have difficulty — most people — need a very different approach, not just the same approach delivered louder and slower. It is, in fact, our experience that is not typical.
Now, I'm sure you would like to see some concrete examples, and I'd like to give them, but I think this post is long enough already. In a future post I will certainly take the opportunity to talk about what's worked and what's failed for me, and to put some teaching materials online. I certainly don't have all the answers but maybe I can help us get closer.
I thought I should post a comment because my (prescription-drug induced) over-hyping of Saeed Dehnadi's research findings in The camel has two humps has caused a lot of confusion, and unfortunately obscured the importance of what he found.
I stand by the claim about 30%--60% failure rates (though perhaps 30%--50% would have been nearer the mark). All programming teachers have seen that sort of thing, though it's been little researched so it remains, unfortunately, anecdotal. What was over-hype was that Saeed had discovered the Holy Grail of aptitude tests and divided programming sheep from goats: he hadn't. Indeed in Mental models, Consistency and Programming Aptitude we had to admit that his test didn't do much to predict levels of performance in the first course: that is, it wasn't very good at dividing good programmers from bad.
But you are mistaken when you say that his result goes away when you subtract those with coding experience: it most definitely does not. In Meta-analysis of the effect of consistency on success in early learning of programming, a summary of Saeed's thesis, we show that it doesn't go away if you make any of the obvious subtractions: those who have programming experience, those who have programming education, those who get the 'right' answer to his questions. His result is robust and very odd: the test picks out a group much less likely to fail in a first programming course, but doesn't predict levels of performance.
In our latest paper Observing Mental Models in Novice Programmers, presented at PPIG 2012, we report a novel finding -- the test worked with a cohort of 14-year-old school students -- and a novel explanation, due to Anthony Robins, that the effect is due to a previously unsuspected cognitive obstacle at the beginning of programming study (essentially, some people trip over the obstacle and never get started, and Dehnadi's test picks out those that are already past that hurdle). This isn't the place to speculate what the obstacle might be, or what other obstacles might be found now we think we know what to look for.
I think you are quite right to point out that most programming 'teachers' have very little insight into teaching. On the face of it, Robins' explanation of Dehnadi's result suggests that what we need is mixed-ability teaching, making sure that the slowest don't get left behind. But on the other hand there have been similar difficulties in the teaching of mathematics for centuries, and a lot of research, and plenty of mixed-ability teaching, and the problems haven't gone away.
Code.org and the UK's Programming in Schools initiative are well-intentioned but, like you, I think they have little idea of what they are talking about. The last time we tried, in the age of the BBC Micro, to teach all children to program we found that some learnt and some didn't and the computer was taken up as a games machine. The same seems to be happening with Raspberry Pi. This isn't an easy problem to solve, and admonitions from Steve Jobs's ghost won't be enough.
Richard Bornat
re: programming....
I heard a report of a study into pain & anxiety (detected by a brain scan) caused by maths...
http://www.cbc.ca/news/technology/story/2012/11/05/math-science-anxiety-study-pain.html
"The study, ..., found that when the subjects were completing math problems, however, they did not show these pain responses."
"We were especially surprised to see it during the anticipation period, and not the actual doing of the math," Lyons said.
BTW: There is an Algol68 group on linkedin.com you might consider.
When I first heard about this research, posted by Richard Bornat to the CPHC list several years ago, I guessed that it was closely related to something I had noticed over many years of teaching programming: a small group of students who were very keen to learn, tried very hard, and were willing to sit with me in one-to-one sessions while I tried to go through elementary exercises with them.
They frequently seemed to be getting the idea, and then, a few minutes later, completely failed to make use of what they had already done, and could not add the next step required.
I reluctantly reached the conclusion with those students that there was some kind of short term memory capability missing, which required noticing and using relations between items and relations between relations that were in their memory (and in some cases also on a screen or sheet of paper in front of them).
I was never able to devise a form of explanation, or practice exercises that helped them.
Some of these were apparently highly intelligent students who obtained very good marks in other courses.
It seemed to me, after looking closely at the new test, that success in that test needed the kinds of short-term memory mechanisms that I had conjectured were missing or inadequate in my students.
In principle, it should be possible to make this conjecture more precise by building a working AI model of the sort of system of reasoning and discovery required, and then to devise more fine-grained tests to find out what's going on in students who, for different reasons, have difficulty learning to program. For example, there were other students for whom the main obstacle was anxiety, lack of confidence, and the fear that they were inferior to other students. Sometimes those students were helped merely by learning that other students had the same anxiety; with the anxiety reduced or modified, they were able to go on and make good progress, some of them even discovering later that they had very high programming potential.