
## 22 Sep DESIGN A SURVEY

DESIGN A SURVEY:

Researchers have long been interested in elements that comprise happiness. If you were to design a survey to determine levels of happiness, what would you include? That is, what elements do you believe combine together to equal happiness? These will become your building blocks (predictive variables) with happiness being the outcome (criterion variable).

500-700 words

APA (no cover, no abstract, ONLY references needed)

Counseling Research: Quantitative, Qualitative, and Mixed Methods

Second Edition

Chapter 7

Predictive Designs


Learning Objectives (1 of 2)

7.1 Describe the nature of predictive designs.

7.2 Describe the relationship between correlation and prediction.

7.3 Identify the types of correlation coefficients and under what conditions each should be used.

7.4 Interpret the magnitude and frequency of a correlation coefficient and explain what they mean in terms of research design.

7.5 Define the terms predictor and criterion.


Learning Objectives (2 of 2)

7.6 Explain the purpose of multiple regression and when it should be used.

7.7 List the assumptions of a multiple regression analysis.

7.8 Explain the purpose of factor analysis and when it should be used.

7.9 List the various forms of factor analysis and explain how each affects the interpretability of your results.

7.10 Describe how predictive designs can be applied to counseling research.

Predictive Designs (1 of 2)

Predictive designs are a form of correlational research that use calculated information about the relationships between variables to forecast future outcomes.

Researchers estimate the likelihood of a particular outcome by using a certain set of variables.

Predictive Designs (2 of 2)

In addition to identifying variables that predict a given outcome, predictive studies may also be used to examine the validity of assessment instruments or treatment protocols, giving practitioners confidence that the instrument or technique being used works as intended.

Variables

Criterion Variables: the outcome variable being studied.

Predictor Variables: used to estimate the criterion.

A typical predictive design includes a single criterion variable and any number of predictor variables.

Correlation and Prediction

The relationship between the predictor and criterion variables is often explained using correlation.

Correlation is a statistical technique used to determine the degree of relationship between two or more variables.

Correlations are based on covariance, or the degree to which two variables vary together.

Correlation

To calculate the correlation between variables first collect data on each variable in its natural state.

Correlational design does not manipulate or control the variables.

Correlation Coefficient

The statistic produced by a correlational analysis is known as the correlation coefficient.

The correlation coefficient is denoted by r.

The correlation coefficient describes the relationship between two or more variables or sets of scores.

If changes in the value of one variable correspond to systematic changes in the value of the other, the two variables have shared variance.

Direction of the Relationship

The direction of the relationship is determined by the valence sign preceding the correlation coefficient value.

Positive correlation is denoted by “+” and negative correlation is denoted by “−”.

Positive Correlation

In a positive correlation both variables tend to move in the same direction.

The fact that the variables change in the same direction indicates a positive relationship.

Negative Correlation

When variables trend in opposite directions a negative correlation exists.

As one variable increases in value, the other decreases.

Degree of the Relationship (1 of 2)

The degree, or strength of a relationship is determined by the numeric value of the correlation coefficient.

It provides a measure of consistency and predictability found in the association between two scores.

(Gravetter & Wallnau, 2013).

Degree of the Relationship (2 of 2)

Values for the correlation coefficient range from 0 to 1 in both the positive and negative directions.

A value of −1.00 represents a perfect negative correlation while a value of +1.00 represents a perfect positive correlation.

The strength of the relationship is based on how close the correlation coefficient is to the poles regardless of + or −.

Strength of Relationship

| Correlation | Size of Association | Strength of Association |
|---|---|---|
| .10 − .29 | Small | Weak |
| .30 − .49 | Medium | Moderate |
| .50 − .69 | Large | Strong |
| .70 and above | Very large | Very strong |

(Table: Rosenthal, 2001)

Because strength is related to how close the correlation approaches −1.00 or +1.00, a correlation coefficient of −.90 indicates a stronger relationship than one of +.85.

A value of 0.00 indicates no relationship.
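The cutoffs in Rosenthal's table can be expressed as a small Python helper (a sketch; the function name and the "negligible" label for values below .10 are my additions, not Rosenthal's):

```python
def strength_of_association(r):
    """Return the strength label for a correlation coefficient r."""
    size = abs(r)  # the +/- sign gives direction, not strength
    if size >= 0.70:
        return "very strong"
    if size >= 0.50:
        return "strong"
    if size >= 0.30:
        return "moderate"
    if size >= 0.10:
        return "weak"
    return "negligible"  # below the table's smallest band (assumption)

print(strength_of_association(-0.90))  # very strong
print(strength_of_association(0.85))   # very strong
```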

Coefficient of Determination

The coefficient of determination is a measure of the amount of variance in one variable that can be predicted from the other variable.

It is computed by squaring the correlation coefficient.

The larger the coefficient of determination the stronger the predictor is at estimating the criterion value.
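A one-line illustration of squaring the correlation coefficient (the value is a toy example, not from the text):

```python
# Coefficient of determination: square the correlation coefficient.
r = -0.70                     # correlation between predictor and criterion
r_squared = round(r ** 2, 2)  # squaring removes the sign
print(r_squared)              # 0.49 -> 49% of the variance is shared
```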

Interpreting the Correlation Coefficient

Sometimes an apparent relationship may exist, yet this relationship may not be statistically significant.

The significance level of r provides a good measure of the consistency or reliability of the observed relationship.

The larger the sample the more reliable the correlation coefficient produced.

Spurious Correlations

A spurious correlation exists when an apparent relationship detected between two variables is really due to an unintended or confounding variable.

To reduce the occurrence of spurious correlation, it is recommended to use at least 100 participants when conducting correlational research.

Bivariate Predictive Models

The choice of design to use depends on the research questions the researcher is trying to answer and the type of data collected.

Of particular interest is whether the data is quantitative (interval or ratio) or categorical (nominal or ordinal).

Pearson Product Moment Correlation Coefficient

The most commonly produced correlation coefficient.

Often referred to as simply the Pearson r.

A Pearson r is computed when the data for both measured variables are quantitative in nature and a linear relationship exists between them.

Pearson Product Moment Coefficients

To solve the formula:

1. Convert each individual score on both variables (X and Y) to standardized z scores.

2. Multiply the z scores computed for X and Y for each participant.

3. Sum the products from step 2.

4. Divide the value obtained in step 3 by N.
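The four steps can be sketched in Python using only the standard library (a minimal illustration with toy data; `pearson_r` is a hypothetical helper name):

```python
import statistics

def pearson_r(x, y):
    """Pearson r via the z-score formula: r = sum(z_x * z_y) / N."""
    n = len(x)
    mx, my = statistics.mean(x), statistics.mean(y)
    # population SDs (divide by N), consistent with dividing by N in step 4
    sx, sy = statistics.pstdev(x), statistics.pstdev(y)
    # Step 1: convert each score on X and Y to a standardized z score
    zx = [(v - mx) / sx for v in x]
    zy = [(v - my) / sy for v in y]
    # Step 2: multiply the paired z scores; Step 3: sum the products
    total = sum(a * b for a, b in zip(zx, zy))
    # Step 4: divide by N
    return total / n

print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 4))  # 1.0 (perfect positive)
```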

Spearman Rho Correlation Coefficient

The Spearman Rho, a variant of the Pearson r, is applied when measuring the linear relationship between two sets of data, one of which is recorded at the ordinal level.

Ordinal data is rank ordered based on magnitude or frequency and scores are assigned a ranking indicating place in a distribution of scores.

The correlation is computed using the difference in ranks between measures for each participant rather than actual scores obtained.
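The rank-difference computation can be sketched with the common formula rho = 1 − 6·Σd² / (n(n² − 1)) (a minimal illustration that assumes no tied scores; `spearman_rho` is a hypothetical helper name):

```python
def spearman_rho(x, y):
    """Spearman rho from rank differences; assumes no tied scores."""
    n = len(x)
    rank_x = {v: i + 1 for i, v in enumerate(sorted(x))}  # 1 = smallest score
    rank_y = {v: i + 1 for i, v in enumerate(sorted(y))}
    # d = difference in ranks for each participant, not in raw scores
    d_squared = sum((rank_x[a] - rank_y[b]) ** 2 for a, b in zip(x, y))
    return 1 - (6 * d_squared) / (n * (n ** 2 - 1))

# Ranks move in exactly opposite directions -> perfect negative correlation
print(spearman_rho([10, 20, 30], [3, 2, 1]))  # -1.0
```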

Point Biserial Correlation Coefficient

A variant of the Pearson r, the point biserial correlation coefficient (r_pb) is used when one set of data represents a continuous quantitative measure and the other a categorical or nominal measure.

The categorical data is for a dichotomous variable, which has only two values and is either/or, such as male/female.
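Because r_pb is simply a Pearson r with the dichotomous variable coded 0/1, it can be sketched the same way (toy data; the helper name is my own):

```python
import statistics

def point_biserial(group, score):
    """Point biserial r_pb: Pearson r with the dichotomy coded 0/1."""
    n = len(score)
    mg, sg = statistics.mean(group), statistics.pstdev(group)
    ms, ss = statistics.mean(score), statistics.pstdev(score)
    # z-score product formula, identical to the Pearson r computation
    return sum((g - mg) / sg * (s - ms) / ss for g, s in zip(group, score)) / n

# e.g. group membership (0 = control, 1 = treatment) vs. a continuous score
print(round(point_biserial([0, 0, 1, 1], [10, 12, 20, 22]), 2))  # 0.98
```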

Phi Coefficient

The phi coefficient (φ) is computed when both sets of data are nominal, dichotomous measures.

To compute the phi coefficient, data from the two dichotomous variables are placed in a contingency table.

Contingency Table

Contingency tables are a visual aid for presenting the participant responses to one variable as a function of the other.

A positive association is noted when most of the data falls along the diagonal (cells A and D), and a negative association when most falls off the diagonal (cells B and C).

|  | Variable X (−) | Variable X (+) | Total |
|---|---|---|---|
| Variable Y (−) | A | B | A + B |
| Variable Y (+) | C | D | C + D |
| Total | A + C | B + D | A + B + C + D |
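From the cell counts A through D, phi can be computed with the standard formula (AD − BC) divided by the square root of the product of the four marginal totals (the counts below are illustrative, not from the text):

```python
import math

def phi_coefficient(a, b, c, d):
    """Phi for a 2x2 table: cells A, B in row Y(-) and C, D in row Y(+)."""
    numerator = a * d - b * c  # positive when diagonal cells A and D dominate
    denominator = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return numerator / denominator

# Most participants fall on the diagonal -> positive association
print(round(phi_coefficient(40, 10, 10, 40), 2))  # 0.6
```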

Multivariate Predictive Models

If two variables are highly related, one variable may be used to predict a pattern in the other.

Prediction studies can represent the extent to which a relationship can be predicted and to test theoretical hypotheses related to predictors of a criterion.

May also be used to examine the predictive validity of assessment instruments.

Data Collection (1 of 2)

Subjects are selected who are pertinent to the study and available to the researcher.

Instruments should be valid measures of the variable of interest.

Surveys, standardized tests, questionnaires, or observational methods can be used to measure the predictor variables and the criterion variables.

Data Collection (2 of 2)

The appraisal of the criterion variable must be valid.

The predictor variables must be measured before the criterion behavior pattern occurs in order to facilitate the claim that the measure predicted the pattern.

Data Analysis

The primary method of data analysis for a prediction study includes correlating the predictor variable with the criterion.

Because a grouping of variables usually results in a more accurate prediction than any one variable, studies often result in a predictive equation referred to as a multiple regression equation.

Multiple Regression (1 of 3)

A multiple regression equation uses all variables that independently predict the criterion to create a more accurate prediction.

Predicted scores are typically placed in a confidence interval.

Prediction equations may be formulated for each of a number of subgroups and a total group.

Shrinkage

Prediction studies can produce initial equations that may be the result of a chance relationship that will not be found again with another group of subjects.

Shrinkage is the tendency for predictive validity to decrease when the research study is repeated.

(Gall, Gall & Borg, 2006).

Regression Analysis

The object of regression analysis is to help predict a single dependent variable from the collected data of one or more independent variables.

When a single independent variable predicts a single dependent variable the statistical technique is referred to as simple regression.

Multiple Regression (2 of 3)

Problems involving two or more independent variables predicting a single dependent variable are referred to as multiple regression analysis.

In a multiple regression equation, variables that are known to individually correlate with the criterion are used to make a more accurate prediction.

Multiple Regression (3 of 3)

Multiple regression is one of the more commonly used techniques in educational research.

Can be used with data representing any scale of measurement and can be used to interpret the results of experimental, causal comparative and correlational studies.

It determines the existence of a relationship and the extent to which variables are relational including statistical significance.

Collinearity

Collinearity is the relationship, or correlation between two independent variables.

Multicollinearity refers to the correlation between three or more independent variables.

Assumptions

Multiple regression shares all of the same assumptions of correlation:

Linearity of relationship

The same level of relationship throughout the range of the independent variable

Interval or near-interval data

Data whose range is not truncated.

(Black, 1999; Heppner, Kivlighan, & Wampold, 1999; Hair, Anderson, Tatham, & Black, 1998)

Hypothesis Testing (1 of 2)

To test hypotheses statistically, the following assumptions are made:

Independence: the scores for any particular subject are independent of the scores of all other subjects.

Normality: In the population, the scores on the dependent variable are normally distributed for each of the possible combinations of the levels of the X variables.

Hypothesis Testing (2 of 2)

Homoscedasticity: In the population, the variances of the dependent variable for each of the possible combinations of the levels of the X variables are equal.

Linearity: In the population, the relation between the dependent variable and an independent variable is linear when all other independent variables are held constant.

Terms (1 of 3)

Regression Coefficient: the numerical value of any parameter estimate that is directly associated with the IV.

Correlation Coefficient (R): indicates the strength of the association between the DV and IV.

Coefficient of Determination (R²): measures the proportion of the variation in the DV that is explained by the IV.

Dummy Coding: recoding categorical variables into a number of separate dichotomous variables.
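Dummy coding can be illustrated in a few lines (a sketch; the variable and category names are invented): a categorical variable with k categories becomes k − 1 dichotomous 0/1 variables, with the omitted category serving as the reference group.

```python
# Hypothetical categorical predictor with three categories
marital_status = ["single", "married", "divorced", "married"]

# "single" is the (arbitrarily chosen) reference category, so it gets no column
dummies = [
    {"married": int(v == "married"), "divorced": int(v == "divorced")}
    for v in marital_status
]
print(dummies[1])  # {'married': 1, 'divorced': 0}
```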

Steps in a Multiple Regression Analysis

A four-step process:

1. Determine the regression model through identification of the regression coefficients.

2. Determine the multiple correlation coefficient (R) and the proportion of shared variance.

3. Determine the statistical significance of the multiple R.

4. Examine the significance of the predictor variables and test individual regression coefficients for statistical significance.
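Steps 1 and 2 can be sketched with NumPy least squares on simulated data (steps 3 and 4, the F and t tests for significance, are omitted; all names and values below are illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=50)
x2 = rng.normal(size=50)
y = 2.0 * x1 + 0.5 * x2 + rng.normal(scale=0.3, size=50)  # criterion

# Step 1: identify the regression coefficients (intercept column included)
X = np.column_stack([np.ones(50), x1, x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# Step 2: multiple R and the proportion of shared variance (R squared)
y_hat = X @ b
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
multiple_r = np.sqrt(r_squared)

print(b)          # estimates should land near the true [0.0, 2.0, 0.5]
print(r_squared)  # close to 1, since the simulated noise is small
```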

Types of Multiple Regression

Least-squares regression: the most common type of multiple regression.

Utilized when the measure of the criterion variable is a continuous scale, the measures of the predictor variables are continuous or categorical scales, and the relationship between the predictor variables and the criterion variable is linear.

Variations of Least-Squares Regression (1 of 2)

Step-up Multiple Regression: also called forward, the predictor that leads to the biggest increase in R is added to the existing group until the addition no longer leads to a statistically significant increase.

Step-down Multiple Regression: also called backward, all likely predictor variables are entered into the equation first; then, systematically, the variable that results in the least decrease in R is removed until a statistically significant decrease occurs.
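The step-up (forward) idea can be sketched as follows (illustrative only: a fixed R-squared gain threshold stands in for the statistical significance test described above, and all data and names are invented):

```python
import numpy as np

def r_squared(columns, y):
    """R squared for a least-squares fit of y on the given predictor columns."""
    X = np.column_stack([np.ones(len(y))] + list(columns))
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def step_up(predictors, y, min_gain=0.02):
    """Forward selection: repeatedly add the predictor that most increases
    R squared, stopping when the gain falls below min_gain (a simple
    stand-in for a significance test). predictors maps name -> 1-D array."""
    chosen, best_r2 = [], 0.0
    remaining = set(predictors)
    while remaining:
        scores = []
        for name in remaining:
            cols = [predictors[n] for n in chosen + [name]]
            scores.append((r_squared(cols, y), name))
        r2, best_name = max(scores)
        if r2 - best_r2 < min_gain:
            break
        chosen.append(best_name)
        best_r2 = r2
        remaining.remove(best_name)
    return chosen

rng = np.random.default_rng(1)
x1, x2, x3 = rng.normal(size=(3, 80))
y = 3.0 * x1 + x2 + rng.normal(scale=0.5, size=80)  # x3 is pure noise
preds = {"x1": x1, "x2": x2, "x3": x3}
print(step_up(preds, y))  # x1 and x2 enter; x3 adds no meaningful gain
```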

Variations of Least-Squares Regression (2 of 2)

Stepwise Multiple Regression: combines the forward and backward approaches. Although popular, stepwise regression has been found to have significant problems in use, resulting in incorrect calculations of variance.

(Antonakis & Dietz, 2011; Thompson, 2013)

Discriminant analysis: This is utilized when the measure of the criterion variable is categorical and the predictor measures produce continuous scores.

Logistic Regression: Used when the predictor measures are continuous or categorical and the measure of the criterion variable is dichotomous.

Nonlinear Regression: Used if a hypothesis exists that suggests a curvilinear relationship between the predictor variables and the criterion variable.

Cautions in Using Multiple Regression

The existence of a predictive relationship is not equal to a causal relationship.

Sample size is critical when choosing the number of predictor variables to be included in the study.

Rule of thumb: use a minimum of 15 participants for each variable included in the regression analysis.

(Gall, Gall, & Borg, 2003)

Factorial Designs

A factorial experiment is a study determining the effect of two or more independent variables, both singly and in interaction with each other, on a dependent variable.

Involves two or more independent variables at least one of which is manipulated by the researcher.

Studies patterns of relationships among DVs with the goal of discovering something about the IVs that affect them without directly measuring the IVs.

Terms (2 of 3)

The effect of each of the IVs on the DV is called the main effect.

The interaction of the effect of two or more variables on the DV is called the interaction effect.

A fixed factor is an IV whose value will not be generalized beyond the experiment.

Variance is a measure of the extent to which scores in a distribution deviate from the mean.