Econometrics For Dummies

by Roberto Pedace, PhD

Publisher’s Acknowledgments

Acquisitions, Editorial, and Vertical Websites
Project Editor: Jennifer Tebbe
Acquisitions Editor: Erin Calligan Mooney
Copy Editor: Caitlin Copple
Assistant Editor: David Lutton
Editorial Program Coordinator: Joe Niesen
Technical Editors: Ariel Belasen, Nicole Bissessar
Editorial Manager: Christine Meloy Beck
Editorial Assistants: Rachelle S. Amick, Alexa Koschier
Cover Photo: © iStockphoto.com/studiocasper
Composition Services
Project Coordinator: Sheree Montgomery
Layout and Graphics: Carrie A. Cesavice, Christin Swinford
Proofreader: Melissa Cossell
Indexer: Riverside Indexes, Inc.
Publishing and Editorial for Consumer Dummies
Kathleen Nebenhaus, Vice President and Executive Publisher
David Palmer, Associate Publisher
Kristin Ferguson-Wagstaffe, Product Development Director
Publishing for Technology Dummies
Andy Cummings, Vice President and Publisher
Composition Services
Debbie Stailey, Director of Composition Services

Book Details
564 pages
PDF format
ISBN 978-1-118-53384-0 (pbk)
ISBN 978-1-118-53388-8 (ebk)
ISBN 978-1-118-53391-8 (ebk)
© 2013 by John Wiley & Sons

About the Author
Roberto Pedace is an associate professor of economics at
Scripps College in Claremont, California. Prior to joining the
faculty at Scripps College, he held positions at Claremont
Graduate University, the University of Redlands, Claremont
McKenna College, and the U.S. Census Bureau. He holds a
PhD in economics from the University of California, Riverside.

Roberto regularly teaches courses in the areas of statistics,
microeconomics, labor economics, and econometrics. While
at the University of Redlands, he was nominated for both the
Innovative Teaching Award and the Outstanding Teaching
Award. At Scripps College, he was recognized for his
scholarly achievements by winning the Mary W. Johnson
Faculty Achievement Award in Scholarship.

Roberto’s academic research interests are in the area of labor
and personnel economics. His work addresses a variety of
important public policy issues, including the effects of
immigration on domestic labor markets and the impact of
minimum wages on job training and unemployment. He also
examines salary determination and personnel decisions in
markets for professional athletes. His published work appears
in the Southern Economic Journal, the Journal of Sports
Economics, Contemporary Economic Policy, Industrial
Relations, and other outlets.

Roberto is also a soccer fanatic. He’s been playing soccer
since the age of 5, paid for most of his undergraduate
education with a soccer scholarship, and had a short
semi-professional stint in the USISL (now known as the
United Soccer League). He continues to participate in leagues
and tournaments but now mostly enjoys sitting on the
sidelines watching his children play soccer.

Introduction

My appreciation for econometrics grew out of my interest in
trying to figure out how the world works. I discovered that
empirical techniques tailored to specific circumstances could
help explain all sorts of economic outcomes. As I came to
understand how the theoretical structure of economics
combines with information contained in real-world data, I
began to see observed phenomena in a different light. I’d
often ask myself questions about my observations. Could I
determine whether the outcomes were random and simply
appeared to be related? If I believed that two or more things I
observed had a logical connection, could I use data to test my
assertions? Increasingly, I found myself relying on the tools
of econometrics to answer these types of questions.

I’ve written Econometrics For Dummies to help you get the
most out of your economics education. By now, your classes
have taught you some economic theory, but you’re craving
more precision in the predicted outcomes of those theories.
Perhaps you’re even questioning whether the theories are
consistent with what you observe in the real world. I find that
one of the most attractive characteristics of properly applied
econometrics is that it’s “school of thought neutral.” In other
words, you can adapt an econometric approach to a variety of
initial assumptions and check the results for consistency. By
using econometrics carefully and conscientiously, you can get
the data to speak. But you better learn the language if you
hope to understand what it’s saying!

Table of Contents
About This Book
Foolish Assumptions
Icons Used in This Book
Beyond the Book
Where to Go from Here
Part I: Getting Started with Econometrics
Chapter 1: Econometrics: The Economist’s Approach to
Statistical Analysis
Evaluating Economic Relationships
Using economic theory to describe outcomes and make predictions
Relying on sensible assumptions
Applying Statistical Methods to Economic Problems
Recognizing the importance of data type, frequency, and aggregation
Avoiding the data-mining trap
Incorporating quantitative and qualitative information
Using Econometric Software: An Introduction to STATA
Getting acquainted with STATA
Creating new variables
Estimating, testing, and predicting
Chapter 2: Getting the Hang of Probability
Reviewing Random Variables and Probability Distributions
Looking at all possibilities: Probability density function
Summing up the probabilities: Cumulative density function
Putting variable information together: Bivariate or joint
probability density
Predicting the future using what you know: Conditional
probability density
Understanding Summary Characteristics of Random Variables
Making generalizations with expected value or mean
Measuring variance and standard deviation
Looking at relationships with covariance and correlation
Chapter 3: Making Inferences and Testing Hypotheses
Getting to Know Your Data with Descriptive Statistics
Calculating parameters and estimators
Determining whether an estimator is good
Laying the Groundwork of Prediction with the Normal and
Standard Normal Distributions
Recognizing usual variables: Normal distribution
Putting variables on the same scale: Standard normal
distribution (Z)
Working with Parts of the Population: Sampling Distributions
Simulating and using the central limit theorem
Defining the chi-squared (χ2), t, and F distributions
Making Inferences and Testing Hypotheses with Probability Distributions
Performing a hypothesis test
The confidence interval approach
The test of significance approach
Part II: Building the Classical Linear Regression Model
Chapter 4: Understanding the Objectives of Regression
Making a Case for Causality
Getting Acquainted with the Population Regression Function
Setting up the PRF model
Walking through an example
Collecting and Organizing Data for Regression Analysis
Taking a snapshot: Cross-sectional data
Looking at the past to explain the present: Time-series data
Combining the dimensions of space and time: Panel or
longitudinal data
Joining multiple snapshots: Pooled cross-sectional data
Chapter 5: Going Beyond Ordinary with the Ordinary Least
Squares Technique
Defining and Justifying the Least Squares Principle
Estimating the Regression Function and the Residuals
Obtaining Estimates of the Regression Parameters
Finding the formulas necessary to produce optimal coefficient values
Calculating the estimated regression coefficients
Interpreting Regression Coefficients
Seeing what regression coefficients have to say
Standardizing regression coefficients
Measuring Goodness of Fit
Decomposing variance
Measuring proportion of variance with R2
Adjusting the goodness of fit in multiple regression
Evaluating fit versus quality
Chapter 6: Assumptions of OLS Estimation and the
Gauss-Markov Theorem
Characterizing the OLS Assumptions
Linearity in parameters and additive error
Random sampling and variability
Imperfect linear relationships among the independent variables
Error term has a zero conditional mean; correct specification
Error term has a constant variance
Correlation of error observations is zero
Relying on the CLRM Assumptions: The Gauss-Markov Theorem
Proving the Gauss-Markov theorem
Summarizing the Gauss-Markov theorem
Chapter 7: The Normality Assumption and Inference with OLS
Describing the Role of the Normality Assumption
The error term and the sampling distribution of OLS
Revisiting the standard normal distribution
Deriving a chi-squared distribution from the random error
OLS standard errors and the t-distribution
Testing the Significance of Individual Regression Coefficients
Picking an approach
Choosing the level of significance and p-values
Analyzing Variance to Determine Overall or Joint Significance
Normality, variance, and the F distribution
The reported F-statistic from OLS
Slope coefficients and the relationship between t and F
Joint significance for subsets of variables
Applying Forecast Error to OLS Predictions
Mean prediction and forecast error
Variance of mean prediction
All predictions are not the same: The prediction confidence interval
Part III: Working with the Classical Regression Model
Chapter 8: Functional Form, Specification, and Structural Stability
Employing Alternative Functions
Quadratic function: Best for finding minimums and maximums
Cubic functions: Good for inflection points
Inverse function: Limiting the value of the dependent variable
Giving Linearity to Nonlinear Models
Working both sides to keep elasticity constant: The log-log model
Making investments and calculating rates of return: The
log-linear model
Decreasing the change of the dependent variable: The
linear-log model
Checking for Misspecification
Too many or too few: Selecting independent variables
Sensitivity isn’t a virtue: Examining misspecification with
results stability
Chapter 9: Regression with Dummy Explanatory Variables
Numbers Please! Quantifying Qualitative Information
Defining a dummy variable when you have only two possible characteristics
Juggling multiple characteristics with dummy variables
Finding Average Differences by Using a Dummy Variable
Combining Quantitative and Qualitative Data in the
Regression Model
Interacting Quantitative and Qualitative Variables
Interacting Two (or More) Qualitative Characteristics
Segregate and Integrate: Testing for Significance
Revisiting the F-test for joint significance
Revisiting the Chow test
Part IV: Violations of Classical Regression Model Assumptions
Chapter 10: Multicollinearity
Distinguishing between the Types of Multicollinearity
Pinpointing perfect multicollinearity
Zeroing in on high multicollinearity
Rules of Thumb for Identifying Multicollinearity
Pairwise correlation coefficients
Auxiliary regression and the variance inflation factor (VIF)
Knowing When and How to Resolve Multicollinearity Issues
Get more data
Use a new model
Expel the problem variable(s)
Chapter 11: Heteroskedasticity
Distinguishing between Homoskedastic and Heteroskedastic Errors
Homoskedastic error versus heteroskedastic error
The consequences of heteroskedasticity
Detecting Heteroskedasticity with Residual Analysis
Examining the residuals in graph form
Brushing up on the Breusch-Pagan test
Getting acquainted with the White test
Trying out the Goldfeld-Quandt test
Conducting the Park test
Correcting Your Regression Model for the Presence of Heteroskedasticity
Weighted least squares (WLS)
Robust standard errors (also known as White-corrected
standard errors)
Chapter 12: Autocorrelation
Examining Patterns of Autocorrelation
Positive versus negative autocorrelation
Misspecification and autocorrelation
Illustrating the Effect of Autoregressive Errors
Analyzing Residuals to Test for Autocorrelation
Taking the visual route: Graphical inspection of residuals
Using the normal distribution to identify residual sequences:
The run test
Detecting autocorrelation of an AR(1) process: The
Durbin-Watson test
Detecting autocorrelation of an AR(q) process: The
Breusch-Godfrey test
Remedying Harmful Autocorrelation
Feasible generalized least squares (FGLS)
Serial correlation robust standard errors
Part V: Discrete and Restricted Dependent Variables in Econometrics
Chapter 13: Qualitative Dependent Variables
Modeling Discrete Outcomes with the Linear Probability
Model (LPM)
Estimating LPM with OLS
Interpreting your results
Presenting the Three Main LPM Problems
Non-normality of the error term
Unbounded predicted probabilities
Specifying Appropriate Nonlinear Functions: The Probit and
Logit Models
Working from the standard normal CDF: The probit model
Basing off of the logistic CDF: The logit model
Using Maximum Likelihood (ML) Estimation
Constructing the likelihood function
The log transformation and ML estimates
Interpreting Probit and Logit Estimates
Probit coefficients
Logit coefficients
Chapter 14: Limited Dependent Variable Models
The Nitty-Gritty of Limited Dependent Variables
Censored dependent variables
Truncated dependent variables
Modifying Regression Analysis for Limited Dependent Variables
Tobin’s Tobit
Truncated regression
Oh, what the heck if I self-select? Heckman’s selection bias
Part VI: Extending the Basic Econometric Model
Chapter 15: Static and Dynamic Models
Using Contemporaneous and Lagged Variables in Regression
Examining problems with dynamic models
Testing and correcting for autocorrelation in dynamic models
Projecting Time Trends with OLS
Spurious correlation and time series
Detrending time-series data
Using OLS for Seasonal Adjustments
Estimating seasonality effects
Deseasonalizing time-series data
Chapter 16: Diving into Pooled Cross-Section Analysis
Adding a Dynamic Time Element to the Mix
Examining intercepts and/or slopes that change over time
Incorporating time dummy variables
Using Experiments to Estimate Policy Effects with Pooled
Cross Sections
Benefitting from random assignment: A true experiment
Working with predetermined subject groups: A natural (or
quasi) experiment
Chapter 17: Panel Econometrics
Estimating the Uniqueness of Each Individual Unit
First difference (FD) transformation
Dummy variable (DV) regression
Fixed effects (FE) estimator
Increasing the Efficiency of Estimation with Random Effects
The composite error term and assumptions of random effects model
The random effects (RE) estimator
Testing Efficiency against Consistency with the Hausman Test
Part VII: The Part of Tens
Chapter 18: Ten Components of a Good Econometrics
Research Project
Introducing Your Topic and Posing the Primary Question of Interest
Discussing the Relevance and Importance of Your Topic
Reviewing the Existing Literature
Describing the Conceptual or Theoretical Framework
Explaining Your Econometric Model
Discussing the Estimation Method(s)
Providing a Detailed Description of Your Data
Constructing Tables and Graphs to Display Your Results
Interpreting the Reported Results
Summarizing What You Learned
Chapter 19: Ten Common Mistakes in Applied Econometrics
Failing to Use Your Common Sense and Knowledge of
Economic Theory
Asking the Wrong Questions First
Ignoring the Work and Contributions of Others
Failing to Familiarize Yourself with the Data
Making It Too Complicated
Being Inflexible to Real-World Complications
Looking the Other Way When You See Bizarre Results
Obsessing over Measures of Fit and Statistical Significance
Forgetting about Economic Significance
Assuming Your Results Are Robust
Appendix: Statistical Tables
Cheat Sheet


About This Book
Econometrics For Dummies provides you with a short and
simple version of a first-semester course in econometrics. I
don’t cite the seminal work or anything from the large
collection of econometric theory papers published in
scholarly journals. The organization of topics may have some
resemblance to traditional econometrics textbooks, but my
goal is to present the material in a more straightforward
manner. Even if you’re taking a second-semester (advanced)
econometrics course or a graduate course, you may find this
book to be a useful, one-stop, nuts-and-bolts resource.
Of course, some technical sophistication is essential in
econometrics. Besides, you’ve taken introductory economics,
statistics, and maybe even intermediate economic theory, so
now you’re ready to show off your technical prowess. But
wait a minute! Sometimes, with all the technical skills being
mastered in learning econometrics, students fail to appreciate
the insights that come from simplicity. In fact, you may even forget
why you’re approaching a problem with a particular
technique. That’s where this book can help.

Please note that I have tried to remain consistent with my
terminology throughout the book, but econometricians
sometimes have several different words for the same thing.
Also, note that I use the statistical software STATA 12.1
throughout, but sometimes I refer to it simply as econometrics
software or just STATA.
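To give a flavor of the STATA workflow the book follows in Chapter 1 (getting acquainted with STATA, creating new variables, and estimating, testing, and predicting), here is a minimal sketch using standard STATA commands. The dataset and variable names are hypothetical stand-ins, not examples taken from the book itself.

```stata
* Load a dataset (filename and variables are hypothetical)
use wages.dta, clear

* Create a new variable: experience squared
generate exper2 = exper^2

* Estimate a wage equation by ordinary least squares
regress lwage educ exper exper2

* Test the joint significance of the experience terms
test exper exper2

* Obtain the fitted values from the regression
predict lwage_hat, xb
```

The same pattern — load, transform, estimate, test, predict — recurs throughout the techniques listed in the table of contents, whatever the specific model being fit.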