Although the study has a small sample size (n=35), the sample is sufficient to test a number of the proposed hypotheses.
Motivation is operationalized using the conscientiousness scale of the Big Five Inventory (BFI). This scale “describes socially prescribed impulse control that facilitates task- and goal-directed behavior, such as thinking before acting, delaying gratification, following norms and rules, and planning, organizing and prioritizing tasks” (John, Naumann, & Soto, 2008, p. 120).
Intuitively, conscientiousness seems like a critical quality in successful students generally. It is surprising, then, to discover that this scale does not correlate with academic performance in learning to write computer software and may even correlate negatively with success. This finding contradicts that of Allen and Robbins (2010), who found that motivation did correlate strongly with first-year performance. However, Allen and Robbins defined success as completing a program within the nominal period of the certificate or diploma of study; by this definition, a student completing a two-year diploma in three years would be deemed “unsuccessful.” A number of conjectures could account for the present finding. Students who are less conscientious or motivated may still be successful at their course work but take fewer courses per semester of study; their grades per course could remain high even while they delay the completion of their program. Alternatively, motivation as a personality trait may simply not be a significant factor in successfully learning to develop computer software.
Comparing students’ preferred learning styles with their academic performance showed no significant correlations. This contradicts the findings of Thomas et al. (2002), who found that reflective and verbal learners became better programmers; that study used a sample of 107 computer science students at the University of Wales. The contradiction may be explainable, however. Academically oriented institutions such as universities and degree-granting colleges often rely heavily on lecture-based content delivery, the very model that would appeal to abstract, sequential, reflective, or verbal learners. Community colleges and institutes of applied learning such as Lethbridge College often utilize a wider variety of pedagogical methodologies and didactic techniques. These practices frequently use multimedia, group activities, project-based learning, and problem-solving activities (in essence, the very strategies that might resonate with active, visual, random, or global learners) in addition to teaching approaches that may be more favourable to abstract, sequential, reflective, or verbal learners.
Balduf (2009) reports that almost all students entering a post-secondary institution for the first time were not adequately prepared in terms of study skills, time-management abilities, and motivation. This finding gave rise to the decision to retest this hypothesis in the current study. Comparing Balduf’s sample with that of the current study reveals a number of significant differences. Balduf drew her population from 83 students who were on academic probation at their college, a small degree-granting institution; only seven students, or 8.6% of this population, agreed to participate in her study. It is possible that, when she chose to interview these subjects, they were more interested in rationalizing their poor academic performance by attributing it to poor time management or to a lack of preparation by their high school environment. A better referential study is the one conducted by George et al., whose random sample of undergraduate students showed that a number of personal behaviours and time-use patterns did correlate with higher GPA.
The fact that the current sample showed no correlation between a self-assessment of time-management ability and academic performance may have a number of explanations. It may be that students are not able to effectively assess whether they have good time-management skills and strategies. Moreover, the students in the current study were in the first two weeks of the first semester of their post-secondary program and may not yet have been in a position to assess their time-management skills and abilities vis-à-vis the demands of a post-secondary environment.
If we turn our attention to how students spend their recreational time, we see that, except for volunteerism, the amount of time students spent in any recreational activity was negatively correlated with their sense of time-management ability as measured by the 11-question survey. Volunteerism may be an aberration: 26 of the participants reported spending no time volunteering, and seven reported spending between 0 and 2 hours per week. The remaining two participants reported spending between 2 and 5 hours and between 10 and 20 hours per week, respectively.
While time management is an important factor in academic success, it is a poor predictor of that success. The exception may be computer game playing. Time spent playing computer games correlated very strongly with performance on the first exam (p&lt;0.01) and still showed a strong relationship with academic performance over the entire semester (p=0.062). Six of the 35 participants reported spending more than 20 hours a week playing computer games. This finding is surprising; anecdotally, many professors can name at least one student who became so involved in computer games that their academic grades suffered. Perhaps those students most passionate about computer games are also passionate about computer programming; the common thread here would be passion around computers and technology. At the same time, it should be noted that time spent on social networking sites was not positively correlated with academic performance. Further work will be required to verify or understand these findings.
The most significant direction to pursue in predicting students’ ability to learn computer programming skills seems to come from the area of assessing problem-solving ability. However, not every logical skill or ability is an equally reliable predictor. The pre-study survey included 12 questions related to logical problem solving and critical thinking; these questions can be grouped into a number of categories:
Gaming Problems (Problems 1, 3, 5, and 8): Questions in which the thinker is asked to calculate the probability or the optimum cost or benefit of a course of action, or to be consistent in their calculation of this benefit.

Decision Trees (Problems 2, 7, and 10): The thinker constructs and works through a decision tree that allows the participant to pose the necessary questions to determine the truth of the entire network. These are the Boolean paradigms discussed by Goodwin and Johnson-Laird (2010) or the disjunctive problems identified by Toplak and Stanovich (2002).

Rule-Based Deduction (Problems 4, 6, and 12): The thinker applies formal inferential logic, rule-based analysis (what Newstead et al. (2006) call “Analytical Reasoning”), or basic algebra to solve the problem.

Problem Modeling (Problems 9 and 11): The thinker re-envisions the problem text so that a mental model or schema forms that presents the problem in a new way; the solution then becomes almost trivial.
Among these groups, gaming problems showed no correlation with academic success, leading to the conclusion that this is not a type of critical thinking useful to students learning to write computer software.
Decision-tree problems show more promise. These problems require the thinker to pursue multiple independent analyses of the problem and then check whether the conclusions share any commonality. For example, the knight-vs-knave problem requires the thinker to begin by allowing inhabitant A to be either a knight or a knave (Figure 2). Only by pursuing both decision branches and determining that they both share the same result is a conclusion to the problem possible. This problem was a significant predictor of academic success.
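The two-branch analysis described above can be sketched as a brute-force enumeration. Since Figure 2 is not reproduced here, the statement used below ("B is the same type as I am") is an assumed standard variant of the puzzle; in that variant both consistent branches yield the same conclusion about B.

```python
from itertools import product

def consistent(a_is_knight, b_is_knight):
    """Check one branch of the decision tree.

    A's (assumed) statement: "B is the same type as I am."
    Knights always speak the truth and knaves always lie, so the
    statement's truth value must match A's type.
    """
    statement = (b_is_knight == a_is_knight)
    return statement == a_is_knight

# Pursue every branch: A and B each a knight (True) or knave (False).
branches = [(a, b) for a, b in product([True, False], repeat=2)
            if consistent(a, b)]

# Both surviving branches agree that B is a knight, even though A's
# own type remains undetermined -- the shared result that makes a
# conclusion possible.
print(branches)  # [(True, True), (False, True)]
```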
Problems 2 and 10 are also examples of this type of problem but did not show statistical significance. These problems were replicated from Toplak and Stanovich’s (2002) study, in which the two deductive conclusions were presented but participants were also given the option of claiming that no solution was possible; this option was not presented in problem 7. As a result, 91% of participants chose this option when confronted with problem 2, and 74% chose it as their solution to problem 10. Stanovich, Toplak, and West (2008) have since referred to this behaviour as “cognitive miserliness”: when solving a problem becomes “expensive” in terms of cognitive effort, many people give up rather than work out the possible conclusions.
Had this option not been presented in these questions, participants would likely have been forced to work through each problem to the point where they could decide upon a conclusion instead of choosing the cognitively miserly option.
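The disjunctive problems Toplak and Stanovich used are of the well-known "married person looking at an unmarried person" type (an assumption here, since the exact wording of problems 2 and 10 is not reproduced): Jack, who is married, looks at Anne; Anne looks at George, who is unmarried. Is a married person looking at an unmarried person? Exhaustive case analysis on Anne's unknown status shows the determinate answer that the "no solution possible" option tempts solvers to avoid:

```python
# Jack (married) looks at Anne; Anne looks at George (unmarried).
# Anne's status is unknown, so enumerate both cases.
looks_at = {"Jack": "Anne", "Anne": "George"}
known = {"Jack": True, "George": False}  # True = married

answers = []
for anne_married in (True, False):
    status = {**known, "Anne": anne_married}
    # Is some married person looking at an unmarried person?
    answers.append(any(status[who] and not status[target]
                       for who, target in looks_at.items()))

# Every branch gives the same answer, so "yes" is determinate even
# though Anne's status is not.
print(answers)  # [True, True]
```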
This group of problems is solved by applying the provided or implied rules to arrive at a solution. Their challenge often stems from the thinker’s ability to parse the semantics of the problem, thereby isolating the relevant facts or rules. For example, in problem 12, the floor-allocation problem, the thinker tests each proposed option against the set of rules and discards an option when and if a rule is found to be violated. Similarly, problem 4 can be solved using simple algebra but does involve a two-part thought process similar to the disjunctive problems. The card problems in question 6 use the rules of logical inference to identify which cards to turn over; however, these rules may not be as well known to first-year students as the rules of algebra. Problem 12 correlated strongly with academic performance (p=0.021), problem 4 slightly less so (p=0.03), and problem 6 much more weakly (r=0.273, p=0.056).
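The card problem in question 6 appears to be a variant of the classic Wason selection task. A minimal sketch of the inferential rules involved, assuming the standard rule ("if a card shows a vowel on one side, it has an even number on the other") and the usual four visible faces, is:

```python
def must_turn(face: str) -> bool:
    """Decide whether a card must be turned over to test the rule
    "if vowel on one side, then even number on the other"."""
    if face in "AEIOU":
        # Modus ponens: the hidden side might be an odd number.
        return True
    if face.isdigit() and int(face) % 2 == 1:
        # Modus tollens: the hidden side might be a vowel.
        return True
    # A consonant or an even number can never falsify the rule;
    # turning the even card is the fallacy of affirming the consequent.
    return False

cards = ["E", "K", "4", "7"]
print([c for c in cards if must_turn(c)])  # ['E', '7']
```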
This group of problems provides insufficient rules or information for the thinker to deduce the solution directly. As a result, the thinker must create a mental schema or model which simplifies the problem before solving it. For example, in the lily-pond problem the thinker must start with a mental picture of a pond completely full of lilies (day 48) and then work backwards to understand that it was half-full one day earlier. This sort of cognitive hinting is not present in the problem text, yet it is the only practical way to solve the problem. Similarly, solving the widget problem using algebra is too intensive for many people; the most efficient way to solve it is to picture a group of five machines each turning out a widget every five minutes, and then realize that each of a hundred machines would still take five minutes to turn out a widget. Both of these problems correlated very significantly (p&lt;0.01) with academic performance, indicating that this sort of thinking is critical to success in learning computer programming.
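Both insights reduce to a single line of arithmetic once the right mental model is in place, as this small sketch (using the standard numbers for these classic problems) shows:

```python
# Lily pond: the patch doubles every day and covers the whole pond
# on day 48.  Working backwards, it covered half the pond one day
# earlier.
full_day = 48
half_day = full_day - 1  # 47

# Widgets: 5 machines make 5 widgets in 5 minutes, so each machine
# makes one widget per 5 minutes.  100 machines making 100 widgets
# still need only one widget per machine.
rate_per_machine = 5 / (5 * 5)                    # widgets per machine-minute
minutes_for_100 = 100 / (100 * rate_per_machine)  # 5.0

print(half_day, minutes_for_100)  # 47 5.0
```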