Articles From Roberto Pedace
Cheat Sheet / Updated 02-09-2022
You can use the statistical tools of econometrics along with economic theory to test economic hypotheses, explain economic phenomena, and derive precise quantitative estimates of the relationships between economic variables. To accurately perform these tasks, you need econometric model-building skills, quality data, and appropriate estimation strategies. Both economic and statistical assumptions are important when using econometrics to estimate models.
Article / Updated 12-09-2021
Many economic phenomena are dichotomous in nature; in other words, the outcome either occurs or does not occur. Dichotomous outcomes are the most common type of discrete or qualitative dependent variables analyzed in economics. For example, a student who applies to graduate school will be admitted or not. If you're interested in determining which factors contribute to graduate school admission, then your outcome or dependent variable is dichotomous. As with outcomes that are quantitative, regression models can be used to explain the occurrence of these qualitative events. They usually require special treatment, however, because they're likely to create some problems for traditional regression analysis. Econometricians have developed techniques to address those issues. The techniques can be quantitatively burdensome and practically impossible without a computer, but understanding the structure of the models is essential if you want to accurately interpret the computer output.

Like qualitative variables, quantitative variables with restricted values also generate unique circumstances for regression analysis when they're used as dependent variables. The problem, as is the case with qualitative variables, is that the limited (censored or truncated) values cause the distributional assumptions of the classical linear regression model to fail. Suppose you're interested in estimating the effect of price changes on the demand for U2 concert tickets. One problem is that the measurement of this demand is limited by the capacity of the arenas booked for the tour. After a concert sells out, you may not be able to observe by how much the demand exceeded the arena limit. You may observe only limited values for many quantitative variables (for example, various types of percentages, SAT scores, grade point averages, and so on). Fortunately, econometricians have developed techniques to handle restricted/limited dependent variables that are similar to those used for qualitative dependent variables.

The following list contains special dependent variable situations and the names of the techniques econometricians have developed to handle them:

- Dichotomous or binary response dependent variable: A discrete variable with two outcomes, usually 0 or 1. Handled with Probit/Logit models.
- Censored dependent variable: A continuous variable where some of the actual values have been limited to some predetermined minimum or maximum value. Handled with the Tobit (censored normal) model.
- Truncated dependent variable: A continuous variable where some of the actual values aren't observed if they are less than some predetermined minimum value or more than some predetermined maximum value. Handled with the truncated normal model.
- Self-selected sample: Missing values for the dependent variable due to nonrandom participation decisions from the population of interest. Handled with the Heckman selection model.
- Polychotomous or multiple response dependent variable: A discrete variable with more than two outcomes. Handled with a multinomial Probit/Logit model or ordered Probit/Logit model (covered in more advanced econometrics courses).
- Discrete dependent variable: A nonnegative, discrete count variable that assumes integer values (0, 1, 2, …). Handled with a Poisson model or negative binomial model (covered in more advanced econometrics courses).
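For the graduate-admission example, a binary response model such as a logit can be estimated with standard software. Here is a minimal sketch in Python using statsmodels; the data set and variable names (admitted, gre, gpa) are made-up assumptions for illustration, not data from the text:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Made-up applicant data: the dependent variable 'admitted' is dichotomous (0 or 1)
rng = np.random.default_rng(1)
df = pd.DataFrame({"gre": rng.normal(580, 100, 300), "gpa": rng.normal(3.4, 0.3, 300)})
logits = -20 + 0.02 * df["gre"] + 2.0 * df["gpa"]
df["admitted"] = rng.binomial(1, 1 / (1 + np.exp(-logits)))

# Logit estimation: models the probability of admission as a function of GRE and GPA
X = sm.add_constant(df[["gre", "gpa"]])
result = sm.Logit(df["admitted"], X).fit(disp=False)
print(result.summary())

# Marginal effects translate the coefficients into changes in admission probability
print(result.get_margeff().summary())
```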
Article / Updated 02-22-2017
In econometrics, a random variable with a normal distribution has a probability density function that is continuous, symmetrical, and bell-shaped. Although many random variables can have a bell-shaped distribution, the density function of a normal distribution is precisely

f(x) = [1 / (σ√(2π))] exp[−(x − μ)² / (2σ²)]

where μ represents the mean of the normally distributed random variable X, σ is the standard deviation, and σ² represents the variance of the normally distributed random variable. A shorthand way of indicating that a random variable, X, has a normal distribution is to write X ~ N(μ, σ²).

A distinctive feature of a normal distribution is the probability (or density) associated with specific segments of the distribution. The normal distribution in the figure is divided into the most common intervals (or segments): one, two, and three standard deviations from the mean. With a normally distributed random variable, approximately 68 percent of the measurements are within one standard deviation of the mean, 95 percent are within two standard deviations, and 99.7 percent are within three standard deviations.

Suppose you have data for the entire population of individuals living in retirement homes. You discover that the average age of these individuals is 70, the variance is 9, and the distribution of their ages is normal. Using shorthand, you could simply write this information as Age ~ N(70, 9). If you randomly select one person from this population, what are the chances that he or she is more than 76 years of age? Using the density from a normal distribution, you know that approximately 95 percent of the measurements are between 64 and 76 (notice that 6 is equal to two standard deviations, because the standard deviation is the square root of 9, or 3). The remaining 5 percent are individuals who are less than 64 years of age or more than 76. Because a normal distribution is symmetrical, you can conclude that you have about a 2.5 percent (5% / 2 = 2.5%) chance of randomly selecting somebody who is more than 76 years of age.

If a random variable is a linear combination of other normally distributed random variable(s), it also has a normal distribution. Suppose you have two random variables described by these terms: X ~ N(μ_X, σ_X²) and Y ~ N(μ_Y, σ_Y²). In other words, random variable X has a normal distribution with a mean of μ_X and variance of σ_X², and random variable Y has a normal distribution with a mean of μ_Y and a variance of σ_Y². If you create a new random variable, W, as the following linear combination of X and Y, W = aX + bY, then W also has a normal distribution. Additionally, using expected value and variance properties, you can describe the new random variable with this shorthand notation: W ~ N(aμ_X + bμ_Y, a²σ_X² + b²σ_Y²) when X and Y are independent (otherwise the variance also includes the term 2abCov(X, Y)).
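To check the retirement-home arithmetic, you can compute the exact tail probability with standard statistical software. Here is a minimal sketch in Python using scipy.stats; the variable names are illustrative:

```python
from scipy.stats import norm

mu, sigma = 70, 3   # mean age 70; variance 9 implies a standard deviation of 3

# Probability that a randomly selected resident is older than 76
# (76 is two standard deviations above the mean)
p_over_76 = norm.sf(76, loc=mu, scale=sigma)   # survival function = 1 - CDF
print(round(p_over_76, 4))                     # about 0.0228, close to the 2.5% rule of thumb
```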
Article / Updated 01-25-2017
In econometrics, you use the chi-squared distribution extensively. The chi-squared distribution is useful for comparing estimated variance values from a sample to those values based on theoretical assumptions. Therefore, it's typically used to develop confidence intervals and hypothesis tests for population variance. First, however, you should familiarize yourself with the characteristics of a chi-squared distribution.

The chi-squared distribution is the distribution of a sum of squared standard normal random variables, so it takes only nonnegative values and tends to be right-skewed. The extent of its skewness depends on the degrees of freedom, or number of observations. The higher the degrees of freedom (more observations), the less skewed (more symmetrical) the chi-squared distribution. The figure shows a few chi-squared distributions, where df1, df2, and df3 indicate increasing degrees of freedom.

The chi-squared distribution is typically used with variance estimates and rests on the idea that you begin with a normally distributed random variable, such as X ~ N(μ, σ²). With sample data, you estimate the variance of this random variable with

s² = Σ(X_i − X̄)² / (n − 1)

If you algebraically manipulate this formula, you arrive at the chi-squared distribution:

(n − 1)s² / σ² ~ χ²(n − 1)

The last step, in which you divide both sides by the known (or assumed) population variance, is what standardizes your sample variance to a common scale known as chi-squared.
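As a concrete illustration of how the chi-squared distribution supports inference about a population variance, here is a minimal Python sketch; the sample data and the 95 percent confidence level are assumptions chosen for the example:

```python
import numpy as np
from scipy.stats import chi2

# Illustrative sample (made-up values)
ages = np.array([68, 72, 71, 65, 74, 70, 69, 73, 66, 75])
n = len(ages)
s2 = ages.var(ddof=1)   # sample variance s^2 with n - 1 degrees of freedom

# 95% confidence interval for the population variance:
#   (n - 1)s^2 / chi2_upper  <=  sigma^2  <=  (n - 1)s^2 / chi2_lower
alpha = 0.05
chi2_lower = chi2.ppf(alpha / 2, df=n - 1)
chi2_upper = chi2.ppf(1 - alpha / 2, df=n - 1)
ci = ((n - 1) * s2 / chi2_upper, (n - 1) * s2 / chi2_lower)
print(s2, ci)
```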
Article / Updated 03-26-2016
In econometrics, the procedure used for forecasting can be quite varied. If historical data is available, forecasting typically involves the use of one or more quantitative techniques. If historical data isn't available, or if it contains significant gaps or is unreliable, then forecasting is typically qualitative. Quantitative approaches to forecasting in econometrics involve the use of causal and/or smoothing models, whereas qualitative forecasting uses expert consensus and/or scenario analysis. The image shows the traditional classification of nine different forecasting methods. What you see here isn't exhaustive, but it does include the most commonly used methods.

No single method is best for every forecasting situation; the best choice depends on the information available. If more than one method can be applied, forecasters typically apply all of the feasible methods and rely on the one with the greatest forecast accuracy. Always evaluate your forecast's accuracy to determine which forecasting technique is best in your specific application.
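One common way to compare forecast accuracy across methods is to compute an error metric such as root mean squared error (RMSE) for each candidate. Here is a minimal sketch in Python; the realized values and the two competing forecasts are made up for illustration:

```python
import numpy as np

def rmse(actual, forecast):
    """Root mean squared error: a common measure of forecast accuracy."""
    actual, forecast = np.asarray(actual), np.asarray(forecast)
    return np.sqrt(np.mean((actual - forecast) ** 2))

# Hypothetical realized values and two competing forecasts
actual       = [100, 104, 103, 108, 112]
causal_fc    = [ 98, 105, 101, 110, 111]   # e.g., from a causal regression model
smoothing_fc = [101, 102, 106, 105, 109]   # e.g., from a smoothing model

print("Causal model RMSE:   ", rmse(actual, causal_fc))
print("Smoothing model RMSE:", rmse(actual, smoothing_fc))
# Whichever method produces the lower RMSE would typically be preferred.
```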
Article / Updated 03-26-2016
In econometrics, the regression model is a common starting point of an analysis. As you define your regression model, you need to consider several elements:

- Economic theory, intuition, and common sense should all motivate your regression model.
- The most common regression estimation technique, ordinary least squares (OLS), obtains the best estimates of your model if the CLRM assumptions hold.
- Assuming a normal distribution of the error term is important for hypothesis testing and prediction/forecasting.

When a regression model is estimated, applied econometricians and readers of the research assume that the researcher chose the correct independent variables, meaning they're truly likely to cause changes in the dependent variable (the outcome of interest). The data and estimation of your model will ultimately reveal which independent variables are important factors and which ones are not. However, prior to obtaining results, you need to provide a sound justification for the variables you've chosen.

After you've specified the model and acquired your data, regression analysis allows you to estimate the economic relationships you've defined in the model. The estimation results provide a quantitative approximation of the relationship between the independent and dependent variables. OLS is the most common technique used for these calculations. Typically, you rely on specialized software to produce your estimates. However, by initially using manual calculations in situations with only one independent variable and relatively few observations, you can gain familiarity with the OLS technique and obtain a better understanding of the software's algorithms and output.

No regression model is perfect. The error term contains the influence of any factors (variables) that affect your dependent variable and aren't captured by your independent variable(s). The characteristics of the error term are of critical importance in econometrics. You need several assumptions about the error term to prove that the OLS results are precise. The assumption that the error term is normally distributed isn't required for performing OLS estimation, but it is necessary when you want to produce confidence intervals and/or perform hypothesis tests with your OLS estimates.
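To see what regression software is doing under the hood, here is a minimal sketch of the manual OLS calculation for a single independent variable; the small data set (schooling and wages) is made up purely for illustration:

```python
import numpy as np

# Illustrative data: X = years of schooling, Y = hourly wage (made-up values)
X = np.array([10, 12, 12, 14, 16, 16, 18])
Y = np.array([ 9, 11, 12, 15, 18, 17, 22])

# OLS formulas with one independent variable:
#   slope     b1 = sum((X - X_bar)(Y - Y_bar)) / sum((X - X_bar)^2)
#   intercept b0 = Y_bar - b1 * X_bar
x_dev, y_dev = X - X.mean(), Y - Y.mean()
b1 = (x_dev * y_dev).sum() / (x_dev ** 2).sum()
b0 = Y.mean() - b1 * X.mean()
print(f"Y-hat = {b0:.3f} + {b1:.3f} * X")

# The same numbers should come out of any regression routine, for example
# numpy's least-squares polynomial fit (returns [b1, b0]):
print(np.polyfit(X, Y, deg=1))
```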
Article / Updated 03-26-2016
Economists apply econometric tools in a variety of specific fields (such as labor economics, development economics, health economics, and finance) to shed light on theoretical questions. They also use these tools to inform public policy debates, make business decisions, and forecast future events. Following is a list of ten interesting, practical applications of econometric techniques:

- Forecasting macroeconomic indicators: Some macroeconomists are concerned with the expected effects of monetary and fiscal policy on the aggregate performance of the economy. Time-series models can be used to make predictions about these economic indicators.
- Estimating the impact of immigration on native workers: Immigration increases the supply of workers, so standard economic theory predicts that equilibrium wages will decrease for all workers. However, because immigration can also have positive demand effects, econometric estimates are necessary to determine its net impact on the labor market.
- Identifying the factors that affect a firm's entry into and exit from a market: The microeconomic field of industrial organization, among many issues of interest, is concerned with firm concentration and market power. Theory suggests that many factors, including existing profit levels, fixed costs associated with entry/exit, and government regulations, can influence market structure. Econometric estimation helps determine which factors are the most important for firm entry and exit.
- Determining the influence of minimum-wage laws on employment levels: The minimum wage is an example of a price floor, so higher minimum wages are supposed to create a surplus of labor (higher levels of unemployment). However, the impact of price floors like the minimum wage depends on the shapes of the demand and supply curves. Therefore, labor economists use econometric techniques to estimate the actual effect of such policies.
- Finding the relationship between management techniques and worker productivity: The use of high-performance work practices (such as worker autonomy, flexible work schedules, and other policies designed to keep workers happy) has become more popular among managers. At some point, however, the cost of implementing these policies can exceed the productivity benefits. Econometric models can be used to determine which policies lead to the highest returns and improve managerial efficiency.
- Measuring the association between insurance coverage and individual health outcomes: One of the arguments for increasing the availability (and affordability) of medical insurance coverage is that it should improve health outcomes and reduce overall medical expenditures. Health economists may use econometric models with aggregate data (from countries) on medical coverage rates and health outcomes or use individual-level data with qualitative measures of insurance coverage and health status.
- Deriving the effect of dividend announcements on stock market prices and investor behavior: Dividends represent the distribution of company profits to shareholders. The announcement of a dividend payment can be viewed as good news when shareholders seek investment income, but it can be viewed as bad news when shareholders prefer reinvestment of firm profits through retained earnings. The net effect of dividend announcements can be estimated using econometric models and data on investor behavior.
- Predicting revenue increases in response to a marketing campaign: The field of marketing has become increasingly dependent on empirical methods. A marketing or sales manager may want to determine the relationship between marketing efforts and sales. How much additional revenue is generated from an additional dollar spent on advertising? Which type of advertising (radio, TV, newspaper, and so on) yields the largest impact on sales? These types of questions can be addressed with econometric techniques.
- Calculating the impact of a firm's tax credits on R&D expenditure: Tax credits for research and development (R&D) are designed to provide an incentive for firms to engage in activities related to product innovation and quality improvement. Econometric estimates can be used to determine how changes in the tax credits influence R&D expenditure and how distributional effects may produce tax-credit effects that vary by firm size.
- Estimating the impact of cap-and-trade policies on pollution levels: Environmental economists have discovered that combining legal limits on emissions with the creation of a market that allows firms to purchase the "right to pollute" can reduce overall pollution levels. Econometric models can be used to determine the most efficient combination of state regulations, pollution permits, and taxes to improve environmental conditions and minimize the impact on firms.
Article / Updated 03-26-2016
In econometrics, the standard estimation procedure for the classical linear regression model, ordinary least squares (OLS), can accommodate complex relationships. Therefore, you have a considerable amount of flexibility in developing the theoretical model. You can estimate linear and nonlinear functions, including but not limited to:

- Polynomial functions (for example, quadratic and cubic functions)
- Inverse functions
- Log functions (log-log, log-linear, and linear-log)

In many cases, the dependent variable in a regression model can be influenced by both quantitative variables and qualitative factors. Aside from keeping track of the units of measurement or converting to a log scale, the use of quantitative variables in regression analysis is usually straightforward. Qualitative variables, however, require conversion to a quantitative scale using dummy variables, which equal 1 when a particular characteristic is present and 0 otherwise. (Note that when more than two qualitative outcomes are possible, the number of dummy variables you need is the number of outcomes minus one.) Utilizing both quantitative and qualitative variables generally results in richer models with more informative results.

Although some experimentation with the exact form of your regression model can be enlightening, take the time to think through specification issues methodically. Be sure you can explain why you've chosen specific independent variables for your model. You should also be able to justify the functional form you've chosen for the model, even if you've assumed a simple linear relationship between your variables. Test the assumptions of the classical linear regression model (CLRM) and make changes to the model as necessary. Finally, spend some time examining the sensitivity of your results by making slight modifications to the variables (sometimes influenced by the outcomes of your CLRM tests) included in the model and the functional form of the relationship. If your results are stable to these types of variations, that provides additional justification for your conclusions.
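As an illustration of the dummy-variable rule (number of outcomes minus one), here is a minimal sketch in Python using pandas; the region categories and the choice of reference category are assumptions made up for the example:

```python
import pandas as pd

# A qualitative variable with three possible outcomes
df = pd.DataFrame({"region": ["North", "South", "West", "South", "North"]})

# Three outcomes require only two dummy variables. drop_first=True omits the
# reference category ("North"), whose effect is absorbed by the regression intercept.
dummies = pd.get_dummies(df["region"], prefix="region", drop_first=True, dtype=int)
print(dummies)   # columns region_South and region_West, coded 0/1
```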
Article / Updated 03-26-2016
If the classical linear regression model (CLRM) doesn't work for your data because one of its assumptions doesn't hold, then you have to address the problem before you can finalize your analysis. Fortunately, one of the primary contributions of econometrics is the development of techniques to address such problems or other complications with the data that make standard model estimation difficult or unreliable. The following list covers the most common estimation issues, a brief definition of each one, their consequences, typical tools used to detect them, and commonly accepted methods for resolving each problem.

High multicollinearity
Definition: Two or more independent variables in a regression model exhibit a close linear relationship.
Consequences: Large standard errors and insignificant t-statistics; coefficient estimates sensitive to minor changes in model specification; nonsensical coefficient signs and magnitudes.
Detection: Pairwise correlation coefficients; variance inflation factor (VIF).
Solutions: 1. Collect additional data. 2. Re-specify the model. 3. Drop redundant variables.

Heteroskedasticity
Definition: The variance of the error term changes in response to a change in the value of the independent variables.
Consequences: Inefficient coefficient estimates; biased standard errors; unreliable hypothesis tests.
Detection: Park test; Goldfeld-Quandt test; Breusch-Pagan test; White test.
Solutions: 1. Weighted least squares (WLS). 2. Robust standard errors.

Autocorrelation
Definition: An identifiable relationship (positive or negative) exists between the values of the error in one period and the values of the error in another period.
Consequences: Inefficient coefficient estimates; biased standard errors; unreliable hypothesis tests.
Detection: Geary or runs test; Durbin-Watson test; Breusch-Godfrey test.
Solutions: 1. Cochrane-Orcutt transformation. 2. Prais-Winsten transformation. 3. Newey-West robust standard errors.
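To make the detection tools concrete, here is a minimal Python sketch that runs a few of the listed diagnostics on a fitted OLS model using statsmodels; the simulated data set and variable names (x1, x2, y) are assumptions made up for the example:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.stattools import durbin_watson

# Illustrative data (made up); in practice this would be your own data set
rng = np.random.default_rng(0)
df = pd.DataFrame({"x1": rng.normal(size=100), "x2": rng.normal(size=100)})
df["y"] = 1 + 2 * df["x1"] - df["x2"] + rng.normal(size=100)

X = sm.add_constant(df[["x1", "x2"]])
results = sm.OLS(df["y"], X).fit()

# Multicollinearity: variance inflation factors (a common rule of thumb flags VIF > 10)
vifs = [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])]
print("VIFs:", vifs)

# Heteroskedasticity: Breusch-Pagan test (a small p-value suggests heteroskedasticity)
lm_stat, lm_pvalue, _, _ = het_breuschpagan(results.resid, X)
print("Breusch-Pagan p-value:", lm_pvalue)

# Autocorrelation: Durbin-Watson statistic (values near 2 suggest no autocorrelation)
print("Durbin-Watson:", durbin_watson(results.resid))
```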
Article / Updated 03-26-2016
Econometric techniques are used to estimate economic models, which ultimately allow you to explain how various factors affect some outcome of interest or to forecast future events. The ordinary least squares (OLS) technique is the most popular method of performing regression analysis and estimating econometric models, because in standard situations (meaning the model satisfies a series of statistical assumptions) it produces optimal (the best possible) results. The proof that OLS generates the best results is known as the Gauss-Markov theorem, but the proof requires several assumptions. These assumptions, known as the classical linear regression model (CLRM) assumptions, are the following:

- The model parameters are linear, meaning the regression coefficients don't enter the function being estimated as exponents (although the variables can have exponents).
- The values for the independent variables are derived from a random sample of the population, and they contain variability.
- The explanatory variables don't have perfect collinearity (that is, no independent variable can be expressed as a linear function of any other independent variables).
- The error term has zero conditional mean, meaning that the average error is zero at any specific value of the independent variable(s).
- The model has no heteroskedasticity (meaning the variance of the error is the same regardless of the independent variable's value).
- The model has no autocorrelation (the error term doesn't exhibit a systematic relationship over time).

If one (or more) of the CLRM assumptions isn't met (which econometricians call failing), then OLS may not be the best estimation technique. Fortunately, econometric tools allow you to modify the OLS technique or use a completely different estimation method if the CLRM assumptions don't hold.
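To see the Gauss-Markov idea at work, the following minimal Python sketch simulates data that satisfy the CLRM assumptions and shows that OLS estimates center on the true parameters; the "true" coefficients, sample size, and number of replications are assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
true_b0, true_b1 = 2.0, 0.5    # assumed "true" population parameters
n, reps = 200, 2000            # sample size and number of simulated samples

estimates = np.empty((reps, 2))
for r in range(reps):
    x = rng.uniform(0, 10, size=n)     # random regressor with variability
    e = rng.normal(0, 1, size=n)       # zero-mean, homoskedastic, uncorrelated errors
    y = true_b0 + true_b1 * x + e      # model that is linear in the parameters
    X = np.column_stack([np.ones(n), x])
    estimates[r] = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS via least squares

# Averages of the OLS estimates land very close to the true parameters
print("Mean of b0 estimates:", estimates[:, 0].mean())    # about 2.0
print("Mean of b1 estimates:", estimates[:, 1].mean())    # about 0.5
```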