
# Analysis Procedures

PROCEDURES FOR TESTING HYPOTHESES

## Overview

A parametric analysis of the data was conducted. Hypotheses were tested by comparing least squares models of the data as described in Introduction to Linear Models (Ward and Jennings, 1979). This Appendix explains the analysis procedures.

## The Form of Linear Models

Linear models are equations formed by linear combinations of vectors. A vector is an ordered set of real numbers (called scalars). A linear combination of vectors is one in which the vectors can be multiplied (i.e., weighted) by scalars and added together (see Green & Carroll, 1976; Ward & Jennings, 1979). The models express proposed relationships between variables in the data set, and can be compared to determine which constitutes a better fit to the data. Linear models take the following general form:

Y = a_{0}U + a_{1}X_{1} + a_{2}X_{2} + … + a_{k}X_{k}

Where:

Y is a vector of criterion variable scores. (The criterion is the variable that is to be explained. In this study, final Level of Conceptualization (LOC) is the criterion variable.)

U is a vector of 1’s

X_{1} is a vector. Each element of the vector is a value for the variable X_{1} associated with a corresponding value in Y.

X_{2} is a vector. Each element of the vector is a value for the variable X_{2} associated with a corresponding value in Y.

X_{k} is a vector. Each element of the vector is a value for the variable X_{k} associated with a corresponding value in Y.

a_{i}‘s are weights on the vectors

The components of the X vectors may also be polynomial transformations, i.e., squared, cubic, etc., transformations of the original values of the X variables (Edwards, 1986; Ezekiel & Fox, 1967; Neter, Wasserman & Kutner, 1983).

The weights, i.e., the a_{i}'s, have interesting properties. Each weight on an X is the amount of change in the criterion Y for a unit change in that X; thus the weights are slopes. Larger slopes are associated with greater effects.

a_{0}, the weight on the unit vector, orients the X's to a certain level on Y, such that the value of a_{0} is the value of Y when the values of all the X's are zero. In geometric terms, a_{0} is the intercept on the Y axis of the surface formed by the weighted X's. The surface is a representation of the relationship between the X's and Y. See Figure J.1.

Figure J.1 Illustration of Slopes and Intercepts for Linear Models
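The slope and intercept properties above can be seen numerically in a minimal sketch. The vectors and weights below are invented for illustration only; Python/NumPy is used merely as a convenient notation for the linear combination:

```python
import numpy as np

# Hypothetical weights and predictor vectors (values invented for illustration).
U = np.ones(4)                      # the unit vector of 1's
X1 = np.array([1.0, 2.0, 3.0, 4.0])
X2 = np.array([0.0, 1.0, 0.0, 1.0])
a0, a1, a2 = 5.0, 2.0, -1.0         # intercept and slopes

# The linear combination: each element of Y is a weighted sum of the
# corresponding elements of U, X1, and X2.
Y = a0 * U + a1 * X1 + a2 * X2
print(Y.tolist())  # [7.0, 8.0, 11.0, 12.0]
```

Note that increasing X1 by one unit while holding X2 fixed raises Y by exactly a1 = 2, the slope, and that Y equals a0 = 5 when both X's are zero, the intercept.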

Y is called the 'criterion variable'. It is composed of scores whose variability (or variance) is to be explained or accounted for (e.g., final Level of Conceptualization). Each of the X's is composed of values that correspond to scores on Y. The a's are weights applied to these predictor variables. The X variables, or predictor variables, are used to account for or explain variation on the criterion variable; in the present study they are measures of Prior Knowledge and Scientific Thinking. The properties of these weighted variables within the models determine the nature of the hypothesized relationships between the criterion and predictor variables. The ability to estimate the values of Y, as a function of the weighted X's, within some specified level of tolerance, is taken as the indicator of success in explaining or accounting for variation on the criterion variable.

The variability of Y typically cannot be completely accounted for as a function of the weighted X's; instead we have an imperfect explanation of variation on Y, the criterion variable. This imperfect estimate of Y is expressed as Ŷ. The function is now expressed as follows:

Ŷ = a_{0}U + a_{1}X_{1} + a_{2}X_{2} + … + a_{k}X_{k}

While this constitutes perfect prediction of Ŷ, it leaves Y, the criterion variable, imperfectly accounted for. The difference between Y and Ŷ, symbolized as E, is the errors of prediction of Y. The magnitude of this difference is an indicator of the degree to which the model fails to explain variation on the criterion variable. Mathematically, one can optimize prediction by weighting each of the predictor variables so as to minimize Y minus Ŷ, or E, the errors of prediction. The sum of the squares of Y − Ŷ is the term to be minimized, and the process of deriving weights on the predictor variables to minimize E in this fashion is called 'least squares regression'. When multiple predictor variables are used, the process is called 'multiple regression'.
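The least squares procedure described here can be sketched directly. The data below are synthetic and invented for illustration; the point is only that the derived weights make the sum of squared errors as small as possible:

```python
import numpy as np

# Synthetic data, invented for illustration: Y is not an exact linear
# function of the X's, so some error E is unavoidable.
rng = np.random.default_rng(0)
n = 50
X1 = rng.normal(size=n)
X2 = rng.normal(size=n)
Y = 3.0 + 1.5 * X1 - 0.5 * X2 + rng.normal(scale=0.8, size=n)

# Design matrix: the unit vector U followed by the predictor vectors.
X = np.column_stack([np.ones(n), X1, X2])

# Least squares chooses the a_i's so that the sum of squared errors is minimal.
a, *_ = np.linalg.lstsq(X, Y, rcond=None)
Y_hat = X @ a            # the predicted values, i.e. Y-hat
E = Y - Y_hat            # the errors of prediction
sse = np.sum(E ** 2)     # the quantity minimized by least squares

# Any other choice of weights yields a larger sum of squared errors.
a_other = a + np.array([0.1, 0.0, 0.0])
sse_other = np.sum((Y - X @ a_other) ** 2)
assert sse_other > sse
```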

The model now takes the form:

Y = a_{0}U + a_{1}X_{1} + a_{2}X_{2} + … + a_{k}X_{k} + E

or

Y = Ŷ + E

Where Y is the weighted sum of the X_{i}'s plus E, the a_{i}'s being weights on the predictor variables derived so as to minimize the sum of the squared values in E (i.e., to minimize the errors in predicting Y from Ŷ).

## Linear Models of the Role of Prior Knowledge and Scientific Thinking in Accounting for Discovery of Scientific Concepts

Linear models were constructed in which final Level of Conceptualization served as the criterion variable. This variable alone does not constitute an indicator of discovery, because it does not take into account growth or change in the subjects' conceptualization. In order for the models to represent change or discovery, a 'pretest', the student's Initial Level of Conceptualization (ILOC), was included in each model. Its role in the models is to account for variation in the criterion that can be attributed to entry-level conceptualization of the task and the phenomenon. Additional variables included in the model must then account for criterion performance over and above what is accounted for by the pretest. A variable that accounts for criterion variance in the presence of ILOC can thus be seen as accounting for growth or change, that is, for Discovery. Thus Initial Level of Conceptualization is a control for prior knowledge. This role is equivalent to that of the 'covariate' in an Analysis of Covariance. All the tests of hypotheses in this study follow this conceptual model.

## An Illustration of the Procedure for Testing Hypotheses

To illustrate the procedure to be used for testing hypotheses consider the following simplified model which is expressed in terms of the major variables in this study:

Y = a_{0}U + a_{1}I + a_{2}S + E

Where:

Y is composed of final Level of Conceptualization scores for individuals (LOC)

U is a vector of 1’s

I is composed of corresponding Initial Level of Conceptualization Scores (ILOC) for the same individuals. It is an indicator of prior knowledge related to the phenomenon and the task

S is composed of scores on some simple or composite measure of scientific thinking

E is Y − Ŷ, an indicator of error in predicting Y from the weighted vectors a_{1}I and a_{2}S.

The a's are weights on the variables I and S, chosen to minimize the sum of the squared values in E.

## Using Linear Models to Test Hypotheses

The method used for testing hypotheses involves the comparison of linear models. Comparisons are made between what are called 'full' and 'restricted' models. Each model is associated with an amount of error (the sum of squared errors). Hypotheses are expressed as differences in the amount of error associated with each model, and the significance of the differences is tested with the F statistic. Each model is also associated with the proportion of criterion variance accounted for (expressed as R^{2}). Hypotheses may also be tested, with the F statistic, by comparing the R^{2} values of the two models; the two methods yield equivalent results.

## The Full Model

A 'full' model of the data is expressed as a linear combination of predictor variables. (In this study, full models were developed that contain final Level of Conceptualization as the criterion variable, Initial Level of Conceptualization acting in the role of control variable, and various other variables serving as indicators of scientific thinking.) Least squares regression coefficients were computed to weight each of the predictor variables in the model (Initial Level of Conceptualization is a predictor). A correlation coefficient is calculated between the values resulting from the linear combination of weighted predictors (Ŷ) and the corresponding individuals' values on the criterion variable Y. This is called the multiple correlation coefficient, R. The square of this coefficient, R^{2}, is inversely related to the error of prediction in the model: R^{2} represents the proportion of criterion variance that is accounted for by the linear combination of weighted predictors.
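The relation between R^{2} and the error of prediction can be sketched as follows. The data and variable names are invented placeholders, not the study's measures; the sketch only demonstrates that the squared correlation between Ŷ and Y equals the proportion of criterion variance accounted for:

```python
import numpy as np

# Placeholder predictors standing in for ILOC and a scientific-thinking score.
rng = np.random.default_rng(2)
n = 40
I = rng.normal(size=n)
S = rng.normal(size=n)
Y = 1.0 + 0.8 * I + 0.6 * S + rng.normal(scale=1.0, size=n)

# Fit the full model Y = a0*U + a1*I + a2*S + E by least squares.
X = np.column_stack([np.ones(n), I, S])
a, *_ = np.linalg.lstsq(X, Y, rcond=None)
Y_hat = X @ a

# R: correlation between the weighted predictor combination (Ŷ) and Y.
R = np.corrcoef(Y_hat, Y)[0, 1]
R2 = R ** 2          # proportion of criterion variance accounted for

# Equivalently, R² = 1 − SSE/SST, so a larger error term means a smaller R².
sse = np.sum((Y - Y_hat) ** 2)
sst = np.sum((Y - Y.mean()) ** 2)
assert np.isclose(R2, 1.0 - sse / sst)
```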

## The Restricted Model

Hypotheses are specified by expressing 'restrictions' upon the full model. A restriction constitutes a simplification of the mathematical expression of the full model. For example, in the present study, restrictions imposed on the full model consist in setting the weights on targeted predictors to zero. A zero weight on a predictor removes the influence of that predictor from the regression equation. This constitutes a hypothesis of no relation between that predictor and the criterion variable in the presence of the other predictors; it is a hypothesis of no effect of the predictor and is called the 'null hypothesis'. The imposition of such a restriction results in a simpler model, called the 'restricted model'. A squared multiple correlation coefficient R^{2} is calculated for the restricted model. The restricted model's error term is never smaller, and its R^{2} never larger, than the full model's.

## Testing a Hypothesis

The R^{2}'s for the full and restricted models are then compared. If the difference between them is significant, then the proportion of variance associated with the excluded variable can be considered significant, and the hypothesis of no effect of that variable can be rejected.

For example, taking the model presented above as a 'full' model:

Ŷ = a_{0}U + a_{1}I + a_{2}S

The test for effects of Scientific Thinking on Discovery is accomplished by hypothesizing that the weight a_{2} in the full model is equal to zero.

By substituting the value zero for a_{2} in the full model, and simplifying algebraically, the restricted model is generated:

Ŷ = a_{0}U + a_{1}I

## The F Test

An F statistic is calculated to determine whether the hypothesis is to be rejected or retained. Associated with each F value is a probability of rejecting a true hypothesis. This probability value (p) is the level of significance and ranges from 0 to 1. Tests conducted at the .05 level of significance falsely reject a true hypothesis in 5% of instances. F tests may be conducted on comparisons of error terms or of R^{2} values, with equivalent results.
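The F test on R^{2} differences can be sketched as follows. All numbers here are invented for illustration; the general formula compares the drop in R^{2} against the full model's unexplained variance, with numerator degrees of freedom equal to the number of restrictions and denominator degrees of freedom equal to n minus the number of weights in the full model:

```python
from scipy.stats import f as f_dist

def f_test(r2_full, r2_restricted, df_num, df_den):
    """F statistic and p-value for comparing full and restricted models
    via their R² values.
    df_num: number of restrictions (weights set to zero);
    df_den: n minus the number of weights in the full model."""
    F = ((r2_full - r2_restricted) / df_num) / ((1.0 - r2_full) / df_den)
    p = f_dist.sf(F, df_num, df_den)  # upper-tail probability of F
    return F, p

# Hypothetical values: n = 60 subjects, full model with weights on U, I,
# and S (3 weights), restriction a2 = 0 dropping S (1 restriction).
F, p = f_test(r2_full=0.40, r2_restricted=0.30, df_num=1, df_den=57)
# F = 0.10 / (0.60 / 57) = 9.5; p falls below .05, so the null
# hypothesis of no effect of S would be rejected here.
```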

## The Starting Full Model

A starting model was created with the properties needed to test the full range of hypotheses of interest. To test for linear effects of Scientific Thinking and Prior Knowledge, vectors of predictor variables composed of total scores on the measure of Proposed Scientific Thinking Characteristics (STC) for each task, and separate vectors containing Initial Level of Conceptualization (ILOC) scores for each task, were included. Vectors composed of cross-products of ILOC and STC were included to test for interaction effects of these two variables. To permit tests of hypotheses of curvilinearity, vectors were included containing 2nd- and 3rd-degree polynomial transformations of the STC variable. Vectors allowing for interaction of ILOC with these 2nd- and 3rd-degree transformations of STC were also included. The criterion vector Y was composed of final LOC scores on each of the tasks.
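The structure of this starting model's design matrix can be sketched as follows. The score vectors are random placeholders for the study's actual measures; the point is only how the polynomial and interaction vectors are assembled alongside the linear terms and the unit vector:

```python
import numpy as np

# Placeholder score vectors (invented; stand-ins for the study's measures).
n = 8
rng = np.random.default_rng(1)
STC = rng.uniform(0, 20, size=n)   # Scientific Thinking total scores
ILOC = rng.uniform(0, 10, size=n)  # Initial Level of Conceptualization

# Columns of the starting full model's design matrix.
X = np.column_stack([
    np.ones(n),        # U, the unit vector
    ILOC,              # linear prior-knowledge term
    STC,               # linear scientific-thinking term
    STC ** 2,          # 2nd-degree polynomial transformation of STC
    STC ** 3,          # 3rd-degree polynomial transformation of STC
    ILOC * STC,        # interaction of ILOC with STC
    ILOC * STC ** 2,   # interaction of ILOC with the 2nd-degree term
    ILOC * STC ** 3,   # interaction of ILOC with the 3rd-degree term
])
print(X.shape)  # (8, 8)
```

Restricted models are then generated from this matrix by dropping columns, which is equivalent to setting the corresponding weights to zero.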