## 19 Sep

**Overview**

The Stata output below provides a description of the four variables used in this analysis and, below that, a correlation matrix. For the next few questions, you will be interpreting the correlation matrix.

**Question 1**: Which variable has the largest correlation with the % of Single Parent Families (spf)? You will answer this question in four parts.

a) Please list the variable name for the variable with the highest correlation with % of Single Parent Families.

b) Please list the correlation coefficient.

c) Please provide an effect size for the correlation.

d) Please provide an explanation as to why these two variables are correlated (1-2 sentences).

**Question 2**: What is the correlation between % parents with nonsecure employment (nsemp) and the child poverty rate (chpov)?

a) Please list the correlation coefficient.

b) Please provide an effect size for the correlation.

c) Please provide an explanation as to why these two variables are correlated (1-2 sentences).

**Question 3**: Which variable has the largest correlation with food insecurity (foodsec)?

a) Please list the variable name for the variable with the highest correlation with food insecurity (foodsec).

b) Please list the correlation coefficient.

c) Please provide an effect size for the correlation.

d) Please provide an explanation as to why these two variables are correlated (1-2 sentences).

**Question 4**: The scatterplot shows the relationship between % parents with nonsecure employment and food insecurity.

a) Does the scatterplot show a positive or negative relationship? Please explain in a sentence or two.

b) With a scatterplot, do we put the independent variable on the x axis or the y axis?

c) Can we get a precise estimate of the correlation from the scatterplot? Please explain.

d) Can we get a precise estimate of the effect size from a scatterplot? Please explain.

**Scatterplots and Correlation**

**Scatterplots**

Scatterplots show the relationship between two (usually) continuous variables. Recall that continuous variables take on many different numeric values; age and income are examples. Scatterplots are very useful for data visualization because they can give us an intuition for the **direction of the relationship** between variables (positive or negative) and the **strength of the relationship**. Usually, we are interested in both things.

With a scatterplot, we normally assume that one variable is the **independent variable**. Most researchers denote the independent variable as X. The independent variable is the input to the model. The **dependent variable** is the output from the model. One way to keep these straight is that the *dependent variable depends on another variable in the model, the independent variable*. Researchers denote the dependent variable as Y. Just like in the alphabet, X comes before Y: a change in X results in some change in Y. In some cases, the independent variable X may be a “cause” of the dependent variable Y, but in most cases, causation is difficult to establish. We discuss the distinction between correlation and causation toward the end of the chapter.

In the examples below, we will be using the State Kids Count data. In both examples, the **dependent** variable is the infant mortality rate (imr). We will construct two scatterplots using two different **independent** variables: the percentage of low-birth-weight babies in each state and the median family income in the state. Figure 1 shows the scatterplot for infant mortality (y axis) and low-birth-weight babies (x axis).


**Figure 1: The relationship between low birth weights and infant mortality**

Here low birth weight is on the x axis and the infant mortality rate is on the y axis. This scatterplot helps answer two questions.

1) **Direction of Relationship**. The graph shows there is a **positive relationship** between low birth weights and the state infant mortality rate. As low birth weights increase, so does infant mortality. This makes sense, as low-birth-weight babies are often premature or have other health difficulties, making survival less likely. So, it makes sense that states with a high percentage of low-birth-weight infants would also have higher overall infant mortality rates.

2) **Strength of the Relationship**. The way to determine the strength of the relationship in a scatterplot is to look at how tightly (or loosely) the data points cluster around the line. This line is the “best fit” line for the data. This graph shows a strong relationship between low birth weight and infant mortality, but interpreting graphs can be a bit like interpreting art! It is important to note that while the direction of the relationship is usually easy to figure out, determining the strength of the relationship from a scatterplot alone is a subjective judgment.


**Figure 2: The relationship between median family income and infant mortality**

1) **Direction of Relationship.** The graph shows there is a **negative** relationship between state median family income and the state infant mortality rate. In states with higher median family incomes, there is less infant mortality. This also makes sense: in states with higher family incomes, more private resources are available throughout the pregnancy, which reduces infant mortality.

2) **Strength of the Relationship.** The way to determine the strength of the relationship in a scatterplot is to look at how tightly (or loosely) the data points cluster around the line. In this respect, the data fit the line well, but not as well as the scatterplot in Figure 1. But again, such an interpretation is inherently subjective.

**The Correlation Coefficient**

Scatterplots are helpful for visualizing the association between X and Y, but graphs cannot provide a precise numerical estimate of the relationship between X and Y. The numerical estimate of the relationship between X and Y is called the correlation coefficient, sometimes denoted as **r** in published research. Correlation coefficients tell us both the direction of the relationship between X and Y and the strength of the relationship. The correlation coefficient is easy to interpret once we understand its properties.

**Box 1: Properties of the Correlation Coefficient**

**Correlation Coefficient Property 1**: r will always indicate a positive or negative relationship through its sign.

**Correlation Coefficient Property 2:** r will always lie within a defined range between -1 and 1. r is a **normalized** measure. This means that r does not depend on the scale of measurement for a variable. For example, age and income are measured on different scales, but r is not affected by the scales; it will always be between -1 and +1.

**Correlation Coefficient Property 3:** r is bidirectional. This means that the correlation between X and Y is exactly the same as the correlation between Y and X. In other words, the “ordering” of the independent and dependent variables is irrelevant to the value of r.

**Correlation Coefficient Property 4:** r measures the strength of the **linear** relationship between X and Y. That means it measures how well the data fit along a straight line. r is also an **effect size** measure.

**Correlation Coefficient Effect Size**

**Property 4** says that **r** measures the degree to which the data fit along a single straight line. But what does an r=0.58 or an r=-0.10 tell us? Is this a large effect? This brings in the concept of **effect size**. Effect sizes tell us how strong the relationship is between variables. Effect sizes help to answer the question of **substantive significance **(McCloskey, 1996). Cohen (1988) offers this guidance for benchmarking r. Note that whether r is positive or negative, the effect size is the same.

**Table 1: Cohen’s Effect Size Benchmarks for r**

| r Value (-) | r Value (+) | Effect Size |
| --- | --- | --- |
| -0.1 to -0.3 | 0.1 to 0.3 | Small |
| -0.3 to -0.5 | 0.3 to 0.5 | Medium |
| -0.5 to -1.0 | 0.5 to 1.0 | Large |

We can now answer the question of what an r = 0.58 means in terms of effect size. Using Cohen’s benchmarks, 0.58 > 0.50, so we conclude that there is a large effect size, or in other words, a strong relationship between X and Y. For r = -0.10, the absolute value is 0.10, which is a small effect size, or equivalently a weak relationship between X and Y.
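These benchmarks are mechanical enough to express in a few lines of code. Here is a minimal Python sketch; the function name `cohen_effect_size` is ours, and the "negligible" label for values below 0.1 is our addition, since Cohen's table starts at 0.1:

```python
def cohen_effect_size(r):
    """Classify a correlation coefficient using Cohen's (1988) benchmarks.

    The sign of r is ignored: a positive and a negative correlation of the
    same magnitude have the same effect size.
    """
    magnitude = abs(r)
    if magnitude < 0.1:
        return "negligible"   # below Cohen's smallest benchmark (our label)
    elif magnitude < 0.3:
        return "small"
    elif magnitude < 0.5:
        return "medium"
    else:
        return "large"

# The two examples from the text
print(cohen_effect_size(0.58))   # large
print(cohen_effect_size(-0.10))  # small
```

The helper simply encodes Table 1; it is a reading aid, not a statistical test.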

**Correlation Coefficients for Infant Mortality, Low Birthweight and Median Family Income**

The Stata output below is called a **correlation matrix**. Correlation matrices show us how each variable is correlated with every other variable. This matrix only contains three variables: imr (infant mortality rate), lobweight (% low-birth-weight babies), and mhhif (median family income).

The first thing you’ll notice is the three ones on the diagonal. This is because those cells in the matrix report the correlation of each variable with itself, which is always exactly 1.

**Figure 3: Correlation Matrix for Infant Mortality Data**

The correlation between infant mortality and low birth weight is 0.66 (rounded). Based on Cohen’s benchmarks, anything above r = 0.5 is considered a large effect size. Therefore, we conclude that the correlation shows a strong relationship between the variables. The correlation between infant mortality and median family income is -0.59. Because the absolute value 0.59 exceeds Cohen’s 0.5 benchmark, it is also a large effect size. Notice that the matrix also reports the correlation between low birth weight and median family income as -0.47. This correlation would be classified as a medium effect size because its absolute value falls between 0.3 and 0.5.

**Correlation and Causation**

Correlation does not necessarily mean causation. Correlation can only establish that two variables are related to one another mathematically. Consider a simple example where a researcher is looking at the relationship between snow cone consumption and swimming pool accidents. The researcher finds that there is a positive correlation between snow cone consumption and swimming pool accidents. Are we to conclude that eating snow cones **causes** swimming accidents? Of course not: both snow cone consumption and swimming rise in hot summer weather, so a third variable drives both. Here the relationship is not causal even though a correlation exists. Correlation cannot establish causation. Instead, researchers must use theory to explain and justify why correlations exist between variables.

**Review**

- Scatterplots show the relationship between two continuous variables.

- The correlation coefficient r measures the linear association between two variables.

- The sign tells us the direction of the relationship.

- The effect size can be determined by using Cohen’s effect size benchmarks.

- Usually, correlations are displayed in a correlation matrix that shows the pairwise correlation between the variables.

- Correlation matrices are an easy way to see how all the variables in a list are related.

- Correlation cannot establish causation.

**Stata Code**

```stata
* Scatterplots and Correlation
* This code uses the Annie E. Casey Foundation data

* Figure 1
twoway (scatter imr lobweight) (lfit imr lobweight)

* Figure 2
twoway (scatter imr mhhif) (lfit imr mhhif)

* Correlation matrix
correlate imr lobweight mhhif
```



**Two-Variable (Bivariate) Regression**

In the last unit, we covered scatterplots and correlation. Social scientists use these as descriptive tools for getting an idea of how our variables of interest are related. But these tools only get us so far. Regression analysis is the next step. Regression is by far the most widely used tool in social science research.

Simple regression analysis can tell us several things:

1. Regression can estimate the relationship between x and y in their **original units of measurement. **To see why this is so useful, consider the example of infant mortality and median family income. Let’s say that a policymaker is interested in knowing how much of a change in median family income is needed to significantly reduce the infant mortality rate. Correlation cannot answer this question, but regression can.

2. Regression can tell us how well the independent variable (x) explains the dependent variable (y). This measure is called the **R squared**.

**Simple Two-Variable (Bivariate) Regression**

Regression uses the equation of a line to estimate the relationship between x and y. You may remember learning about the equation of a line back in algebra, perhaps as Y = mX + B. In statistics, we use a different form:

**Equation 1:** Y = B0 + B1X + u

Let’s define each term in the equation:

- Y is the dependent variable. It is placed on the Y (vertical) axis. In the example below, the dependent variable (Y) is the infant mortality rate.

- B0 is the Y intercept. B0 is also referred to as “the constant.” B0 is the point where the regression line crosses the Y axis. Importantly, B0 is equal to the **predicted value of Y when X = 0**. In most cases, B0 does not get much attention, for two reasons. First, the researcher is usually interested in the overall relationship between x and y, not the relationship at the single value x = 0. Second, independent variables often do not take on the value zero. Consider the AECF sample data. There are no states with low-birth-weight percentages equal to zero, so we would be extrapolating beyond what the data tell us.

- B1 is usually the main point of interest for researchers. It is the slope of the line relating x to y. Researchers usually refer to B1 as a slope coefficient, regression coefficient, or simply a coefficient. **B1 measures the change in Y for a one-unit change in X**. We represent change by the symbol ∆:

B1 = ∆Y / ∆X

- u is the error term. The error term is the distance between the regression line and the dots on the scatterplot. Think about it: regression estimates a single line through the cloud of data. Naturally, the line does not hit all the data points. The degree to which the line “misses” a data point is the error. u can also be thought of as *all the other factors that affect the infant mortality rate besides X*. Importantly, we **assume that u is totally random given X**.

**The Black Box of Regression**

Intuitively, regression analysis finds the line that is the best predictor of the dependent variable. In the scatterplot, this line is the one that “fits” the data the best. We can see that the line does not go through all of the points in the scatterplot. So, how does regression find this line? It finds the line that *minimizes the squared error*. This is why regression is also called “least squares” regression. The mathematical proof is not important, as long as we understand that the regression line is the best fit for the data.
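The least-squares idea can be illustrated with a short Python sketch using made-up toy data (not the AECF data): the closed-form OLS slope and intercept produce a smaller sum of squared errors than any nearby line.

```python
# Toy data invented for illustration
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form least-squares estimates of slope (b1) and intercept (b0)
b1 = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
      / sum((x - mean_x) ** 2 for x in xs))
b0 = mean_y - b1 * mean_x

def sse(intercept, slope):
    """Sum of squared errors for a candidate line."""
    return sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))

# The least-squares line beats any nearby candidate line
assert sse(b0, b1) <= sse(b0, b1 + 0.1)
assert sse(b0, b1) <= sse(b0 + 0.5, b1)
```

The asserts pass because the OLS formulas are exactly the values that minimize the squared error, which is the point of the “least squares” name.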

**The Predicted Value of Y, “yhat”**

The estimated regression equation for the line that relates infant mortality to low birth weight does not contain an error term. **This makes sense, because it is the equation for the regression line itself, not the actual data points (Y).**

To make this distinction clear, define the term Ŷ as the predicted value of Y along the regression line:

**Equation 2: **Ŷ = B0 + B1X

Subtracting Equation 2 from Equation 1 gives:

Y = B0 + B1X + u

Ŷ = B0 + B1X

Y - Ŷ = u

This means each observation has values for Y, Ŷ and u. To make this more concrete, let’s consider the example of infant mortality and low birth weights.
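Before turning to that example, here is a small Python sketch of the Y = Ŷ + u decomposition, using hypothetical coefficients and data invented purely for illustration:

```python
# Hypothetical regression line, for illustration only
b0, b1 = 1.0, 0.5

# (x, y) pairs; each y deliberately deviates from the line a little
observations = [(2.0, 2.3), (4.0, 2.8), (6.0, 4.1)]

residuals = []
for x, y in observations:
    yhat = b0 + b1 * x        # predicted value: the point on the line
    u = y - yhat              # residual: vertical distance from the line
    residuals.append(round(u, 10))

# Every observation has its own Y, yhat, and u, and Y = yhat + u exactly
print(residuals)
```

Each dot on a scatterplot splits the same way: the part the line predicts (Ŷ) plus the part it misses (u).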

**Example: Infant Mortality and Low Birth Weights**

For regression (unlike correlation), the researcher must specify the dependent variable and the independent variable. Logically, low birth weights should contribute to the infant mortality rate.

This makes sense too if we think about how the regression equation works. To make things concrete, let’s say that a lawmaker wants to know what effect low birth weights have on infant mortality. The regression equation would be:

imr = B0 + B1lobweight + u

The Stata output has a lot of numbers. First, let’s focus on getting the actual estimates for the regression equation. We get these numbers from the “Coefficient” column.

The bottom coefficient is labeled _cons. This is short for “constant,” which is just another name for the y intercept, B0. In this case, B0 = 1.205.

The coefficient labeled lobweight is the one we are really interested in. This coefficient is B1. For this regression B1=0.562.

Now we can write out the regression:

imr = B0 + B1lobweight + u

Substituting the numbers from the table:

imr = 1.205 + 0.562 lobweight + u

**Interpreting the Equation**

B0 is usually not of interest to the researcher for reasons discussed above.

B1 is the main coefficient of interest, especially for policy. It tells us about the relationship between low birth weights and the infant mortality rate.

**Rules for Interpreting B1**

- B1 measures the change in Y that results from a one-unit change in X.

- So, we can say that **a one-unit change in X results in a B1 change in Y**.

- In the regression above, B1 = 0.562. That means that a one-unit change in the percentage of low birth weights results in a 0.562 change in the infant mortality rate.
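As a quick check of this interpretation, we can plug the estimated equation into a few lines of Python (the helper function `predicted_imr` is ours, just for illustration):

```python
# Estimated equation from the text: imr = 1.205 + 0.562 * lobweight
b0, b1 = 1.205, 0.562

def predicted_imr(lobweight):
    """Predicted infant mortality rate for a given % low birth weight."""
    return b0 + b1 * lobweight

# A one-unit change in X changes the predicted Y by exactly B1
change = predicted_imr(9.0) - predicted_imr(8.0)
print(round(change, 3))  # 0.562
```

No matter which starting value of lobweight we pick, the one-unit change in predicted imr is always B1; that is what it means for the relationship to be linear.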

The user-written Stata command aaplot gives a nice summary:

**Model Fit**

We already saw with scatterplots and correlation that different models have different degrees of “fit,” meaning how well the data cluster around a line.

In regression, most analysts use the R squared. The R squared has a ready interpretation once we know its properties:

**Box 1: R Squared Properties**

R2 Property 1: R squared measures the proportion of the variation in Y that is explained by the variation in X. An easier way to say it is that the model explains (R² × 100)% of the variation. For the running example, R² = 0.436. That means that low birth weights explain 43.6% of the variation in the infant mortality rate. Or, for short, the model explains 43.6%.

R2 Property 2: R square will always (except in extreme and unusual cases) lie somewhere on the interval between 0 and +1. In other words, R squared will be a positive value between 0 and 1.

R2 Property 3: R squared values are only comparable **if the dependent variable is the same**. This means that if we want to compare two models on the R squared, Y must be the same for both models.
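In bivariate regression, the R squared is simply the square of the correlation coefficient, which we can check in Python with the rounded correlation from the running example:

```python
# Correlation between imr and lobweight, rounded, from the correlation matrix
r = 0.66

# In bivariate regression, R squared equals r squared
r_squared = r ** 2   # ~0.436 before rounding

# Share of variation in Y explained by X, as a percentage
print(f"The model explains {r_squared * 100:.1f}% of the variation in Y.")
```

Squaring r = 0.66 gives about 0.436, which matches the R squared reported by Stata for this regression; the small gap is only rounding.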

**Effect Size for R Squared**

As with correlation coefficients, it is helpful to have a benchmark to determine effect size. Recall that effect size tells us how large (or small) the effect of one variable is on another. We can take the benchmarks for r and square them to get the benchmarks for R².

**Table 1: Cohen’s Effect Size Benchmarks for R Squared**

| R Squared | Effect Size |
| --- | --- |
| 0.01 to 0.09 | Small |
| 0.09 to 0.25 | Medium |
| 0.25 to 1.0 | Large |

In the example, the R squared was 0.436, which exceeds 0.25, so we conclude that the R squared shows a large effect size between low birth weights and infant mortality.

**Hypothesis Testing**

So far, we have been focusing on how to interpret regression results. But our results are derived from a **sample**. This means we cannot be sure that our results reflect what is going on in the population. Of course, we cannot know what we don’t know, so instead we do hypothesis testing.

Generally, with hypothesis testing, we are focused on a “null” hypothesis. This involves a little thought experiment. We ask the following, “If there was no effect of X on Y in the population, how likely is it that we would have obtained our regression results?”

We write the null hypothesis as:

**Null Hypothesis Ho: B1pop = 0**

This is equivalent to saying that B1 in the population is zero. Remember, we do not know what B1 is in the population; we are just testing whether it is zero.

**Alternative Hypothesis H1: B1pop ≠ 0**

The alternative hypothesis is that B1 in the population does not equal zero (i.e., there is some effect of X on Y).

**Using the T Test**

To test the hypothesis above, we use a t test. The t distribution is very similar to the Z distribution (standard normal).

The formula for the t test in regression is:

t = (B1 - B1pop) / SE(B1)

Notice that when we do a t test, we are comparing our actual sample regression coefficient B1 with a hypothesized value of B1 for the population, B1pop.

We could test for ANY population value using this formula. We could set the population value to 8,000, 50, or -0.0078. The reason we set the population value to zero is that this is the only value of B1pop that would indicate NO relationship between X and Y. As a result, the standard hypothesized value for B1pop is zero. Notice what this does to the formula above. If we substitute zero for B1pop:

t = (B1 - 0) / SE(B1) = B1 / SE(B1)

What is SE(B1)? This is called the standard error of B1. If we think of running an infinite number of regressions with different samples, we could plot our values of B1 on a graph. The standard error of B1 tells us how much variation there would be in this hypothetical distribution.

Now let’s look back at the table. B1 is 0.562 and the standard error of B1 is 0.09138. Plugging in the numbers gives

t = 0.562 / 0.09138 ≈ 6.15
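The same arithmetic can be done in a few lines of Python, using the coefficient and standard error reported above:

```python
# t test for H0: B1pop = 0, with the numbers from the regression output
b1 = 0.562        # estimated coefficient on lobweight
se_b1 = 0.09138   # standard error of B1
b1_pop = 0        # hypothesized population value under the null

t = (b1 - b1_pop) / se_b1
print(round(t, 2))  # 6.15, matching the t statistic Stata reports
```

Because B1pop is zero under the null, the formula collapses to the coefficient divided by its standard error, which is exactly how Stata computes the t column.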

**From t to a P Value**

The t statistic on its own does not tell us much. What we are interested in is the p value: the probability of obtaining our t statistic if the null hypothesis were true. To get the p value, we must use a t distribution.

**Properties of the t Distribution and p Values**

Property 1: The t distribution is a probability distribution that measures the likelihood of different t values. Therefore, the total area of the t distribution equals 1.

Property 2: For a t test, we assume that the mean of the population t distribution is zero, which is the same as saying B1pop=0.

Property 3: A large t statistic is unlikely because as we move from the mean of the t distribution to its tails, the probability of the t values goes down.

Property 4: t tests tell us the probability that we would obtain our sample t value if the population t value were, in fact, zero. Thus the term hypothesis testing. This probability is called a p value. Put another way, **the p value tells us the probability that we would be incorrect in saying B1pop ≠ 0 if in fact B1pop = 0**.

Property 5: A small p value gives us reason to REJECT the null hypothesis B1pop = 0, because a small p value indicates that it is unlikely, given our sample value of B1, that B1pop = 0.

Looking back at the results, the p value corresponding to the t statistic of 6.15 is 0.00. The p value is so small that it is zero to three digits! This means that the chance of obtaining our sample t value of 6.15 would be very, very small if the true population coefficient were zero.

**Confidence Intervals**

Another way to think about hypothesis testing is using confidence intervals. Confidence intervals tell us the range of values a coefficient could take. Typically, researchers use 95% confidence intervals.

We can rearrange some of the terms from the t test to obtain confidence intervals.

CI upper = B1 + (SE(B1) × t)

CI lower = B1 - (SE(B1) × t)

With confidence intervals, we must specify a value for t. This value of t corresponds to whatever confidence level we want to set. Usually this is 95%.

Stata gets this value of t for us, so we do not have to look it up. Intuitively, if we compared a 95% CI to a 90% CI, the former would be WIDER. This makes sense when we think about the relationship between t and probability: the higher the confidence level, the larger the critical t value, and therefore the wider the CI.

In the results above, the 95% CI for the coefficient on low birth weight is 0.378 to 0.745, which is a wide margin! The CI allows us to get an idea of how much a coefficient could vary. The “official” interpretation of the 95% CI is, “95 times out of 100, intervals constructed this way would contain the true population coefficient.”
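As a closing sketch, the CI formulas can be checked in Python with the numbers from the text. The critical t value of about 2.01 is our assumption for this sample's degrees of freedom; Stata looks up the exact value, so the reproduced interval is only approximate.

```python
# Approximate 95% CI for B1, using the coefficient and SE from the text
b1 = 0.562
se_b1 = 0.09138
t_crit = 2.01     # assumed critical t value for ~95% confidence

ci_lower = b1 - t_crit * se_b1
ci_upper = b1 + t_crit * se_b1
print(f"95% CI: {ci_lower:.3f} to {ci_upper:.3f}")  # roughly 0.378 to 0.746
```

The hand-computed interval matches Stata’s reported 0.378 to 0.745 up to rounding in the critical t value.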