Kerosene Essay

     The United States’ populace has experienced considerable changes in personal energy consumption over the last several decades.1  Specifically, consumer demand for energy sources such as kerosene – a distilled form of petroleum used in jet engines and for cleaning, cooking, and heating – has declined considerably.2  This paper uses time-series analysis, based on “an ordered sequence of values of a variable at equally spaced time intervals,” to examine and measure the effect of several explanatory variables on kerosene usage.3  The model analyzes quarterly data from 1959 to 1990 and estimates changes in gallons of kerosene purchased based on changes in the price of kerosene, real disposable income, miles per gallon of fuel consumption, and the population over sixteen years of age.  Results are presented via the following:

1.) model overview and analysis of dependent and explanatory variables’ coefficients,


2.) overall model strength analysis through the R-square, Adjusted R-square, and “F” statistics,

3.) significance testing of individual variables,

4.) analysis of the Durbin-Watson and Diagnostic Tests’ statistics for sources of error and confirmation of model strength/”goodness of fit,” and

5.) microeconomic implications.

Model Overview and Analysis of Coefficients

     Again, this is a time-series model that examines quarterly data during a 32-year period.  There are, thus, 128 observations of the relationship between the dependent variable, “QTY,” the quantity of kerosene purchased (billions of gallons), and the following independent variables: “P,” the price of kerosene (dollars per gallon); “Y,” real disposable income (billions of dollars in 1986 prices); “MG,” the log of average fuel consumption (miles per gallon); and “POP,” the civilian, non-institutional population aged 16 years and above (thousands).  The computed regression line/model is as follows:

lnQTY = –5.9056 C – 1.1834 lnP + 0.73849 lnY – 0.58189 lnMG + 0.97366 POP

The model combines two functional forms, where functional form refers to “the algebraic form of a relationship between a dependent variable and regressors or explanatory variables.”4  One is double-log or logarithmic (for explanatory variables “P,” “Y” and “MG”).  In this functional form, both the dependent and independent variables are in natural logs, and the slope is the elasticity coefficient.5  With the double-log functional form, a one percent (1%) change in the independent variable causes a proportionate (percent) change in the dependent variable.6  This form measures the elasticity of market demand for kerosene (dependent variable “QTY,” the log of the quantity of kerosene purchased in billions of gallons) in response to changes in the logs of price, income and miles per gallon.  In terms of elasticity, a market or good is described as elastic or inelastic according to how responsive it is to a proportionate change in another quantity.7

The model’s other functional form is semi-log (for explanatory variable “POP”).  In this functional form, the slope is interpreted as follows: a one-unit change in the independent variable (“POP”) causes a percentage change in the dependent variable (“QTY”).8
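As an illustration, a specification of this kind can be estimated by ordinary least squares with Python’s statsmodels library.  The sketch below is not the original estimation; the file name and column names are hypothetical placeholders standing in for the actual quarterly series described above.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical quarterly data set with the series used in this paper:
# QTY (billions of gallons), P (dollars per gallon), Y (real disposable
# income), MG (miles per gallon), POP (thousands of persons aged 16+).
df = pd.read_csv("kerosene_quarterly_1959_1990.csv")  # placeholder file name

# Double-log terms for P, Y and MG; POP enters in levels (semi-log form).
X = pd.DataFrame({
    "lnP": np.log(df["P"]),
    "lnY": np.log(df["Y"]),
    "lnMG": np.log(df["MG"]),
    "POP": df["POP"],
})
X = sm.add_constant(X)        # adds the intercept term "C"
y = np.log(df["QTY"])         # dependent variable in natural logs

model = sm.OLS(y, X).fit()
print(model.summary())        # coefficients, R-square, F statistic, Durbin-Watson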

 The coefficients of the variables provide an interesting look at changes in kerosene consumption over time.  The “C” term is the model’s constant (intercept), which captures the level of the dependent variable excluding the effect of the explanatory variables.  Between 1959 and 1990, on average, there was a 5.9056% decrease per quarter in the amount of kerosene purchased for private consumption, apart from the effects of the independent variables in the model.  This indicates that, over time, people have been using alternative sources of energy (i.e., electricity) for cooking, heating, etc.  In other words, during the 32-year period under study, personal consumption of kerosene declined (on average).

The “P” variable measures “own-price elasticity.”9  For every 1% increase in the price of kerosene (per gallon), the gallons of kerosene purchased decrease by 1.18%.  Kerosene’s cost and the quantity purchased have an inverse relationship.  When the price of kerosene increases, people purchase less of it.  When the price of kerosene declines, people purchase more of it.  At 1.18, the variable’s own-price elasticity is > 1, indicating that demand is elastic, or sensitive to changes in price (in other words, the price of kerosene affects how much kerosene people purchase).10

The “Y” variable measures “income elasticity.”11  For every 1% increase in real disposable income, the gallons of kerosene purchased increases by .73849%.  Income and quantity of kerosene purchased have a positive relationship.  When people earn more money, they buy more kerosene (because they can afford to do so).  At .73849, income elasticity < 1, indicating that kerosene is a necessity good, or one that people use to take care of basic needs (like food preparation and heating).12  Consumption of kerosene will increase with increases in income, but not at the same rate that income increases.

The “MG” variable is the log of miles per gallon.  For every 1% increase in average fuel consumption (miles per gallon), the gallons of kerosene purchased decrease by .58189%.  Fuel consumption (for vehicle use) and the quantity of kerosene purchased have an inverse relationship.  This suggests that when people are out driving (using more fuel/gasoline), they are at home less.  Thus, they use kerosene for heating and cooking, and not for personal transportation (cars and buses are not fueled by kerosene).

The “POP” variable is the number (in thousands) of civilians aged sixteen and above who are not incarcerated or in the military.  For every increase of 1,000 people in the civilian, non-institutional population aged 16 years and above, the gallons of kerosene purchased increase by .97366%.  Population and the quantity of kerosene purchased have a positive relationship.  More people means greater personal consumption/use of kerosene.

Overall Model Analysis

A model’s ability to predict or estimate, also called its strength or “goodness of fit,” is measured by a series of statistics.  The “goodness of fit” indicates the difference between the actual observations in a data set and those predicted by the model.13  One of the most commonly used statistics is R-squared, the proportional (percent) variation in a data set explained by a model.14  At most, R-square equals one, indicating a model explains 100% of the variation/changes in a dependent variable.15  R-square is calculated with the following formula:

R2 = ESS/TSS

This is the percentage of total variation in the dependent variable explained jointly by the variation in independent variables.16

In this model, R-square equals 0.99359, meaning that 99.359% of the changes in the log of kerosene consumption are explained by the independent variables listed.  This is very high (very close to 100%), indicating a strong correlation between the explanatory variables and the dependent variable.  R-square does not, however, imply causation.  In other words, we cannot say (despite the results of the model) that the explanatory variables caused the variation in the quantity of kerosene purchased.

Another statistic that measures a model’s strength is Adjusted R-square.  The Adjusted R-square is generally smaller than R-square because it takes into account the number of explanatory variables in the model (Adjusted R-square can only be as large as, or smaller than, R-square).17  With R-square, the formula’s denominator is fixed (unchanging) and the numerator can ONLY increase.18  Therefore, each additional variable used in the equation will, at least, not decrease the numerator and will probably increase it at least slightly, resulting in a higher R-square, even when the new variable makes the equation less efficient (worse).  In theory, using enough independent variables to explain the change in a dependent variable could push R-square toward “1,” which would represent a perfect model (and is highly unlikely).  In other words, the R-square value can be manipulated and should be treated with suspicion.  The Adjusted R-square value attempts to correct this shortcoming by adjusting both the numerator and the denominator by their respective degrees of freedom.
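A minimal numerical check of this adjustment, assuming the standard formula Adjusted R² = 1 – (1 – R²)(n – 1)/(n – k), with the n = 128 observations and k = 5 estimated parameters used in this model:

# Adjusted R-square from R-square, sample size n, and number of
# estimated parameters k (the constant plus four slope coefficients).
def adjusted_r_squared(r2: float, n: int, k: int) -> float:
    return 1 - (1 - r2) * (n - 1) / (n - k)

print(adjusted_r_squared(0.99359, 128, 5))   # about 0.99338, matching the value reported below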

In this model, there is minimal difference between the adjusted R-square (0.99338) and R-square (0.99359), which generally indicates that no explanatory variable(s) are missing. This also suggests this is a very strong model that explains most of the proportional change in kerosene purchases (consumption) during the time period under study.

A final statistic that measures a model’s strength is the “F” statistic.  The “F” test is the “overall significance test.”19  The “F” statistic answers the following question: “Do the independent variables, overall, reliably predict the dependent variable?”  It is calculated with the following formula:

“F” = MS Model/MS Residual20

We use the “F” test to determine if our hypothesis regarding the relationships between the dependent and independent variables is correct.  A standard hypothesis for this test is as follows:

Null Hypothesis or H0: β = 0

Researcher’s Hypothesis or H1: β ≠ 0

“β” refers to the coefficients.  If the value of a coefficient is zero, then no relationship exists.  At a 5% level of significance, we want to determine if the coefficients are statistically significant.  The significance level, or alpha, refers to the probability that the researcher will make a mistake with hypothesis testing.21  We must also calculate the degrees of freedom.  The degrees of freedom (df) are as follows: (4, 123).  The first “df” is called “n1,” the numerator degrees of freedom.  Its value is “4,” which is computed as follows: n1 = k-1, with “k” being the number of variables in the model, including the dependent variable.  So, “n1” is computed as follows: 5-1 = 4.  The second “df” is called “n2,” the denominator degrees of freedom.  Its value is “123,” which is computed as follows: n2 = n-k, with “n” being the number of observations in the model, and “k” being the number of variables in the model, including the dependent variable.  Once we determine the degrees of freedom, we refer to the “F” table to obtain the “critical value” for “F.”  The critical value is 2.456.  The computed “F” is 4764.7, far greater than the critical value for “df” (4, 123).  So, the computed “F” falls in the critical or rejection region.  This result means we reject, with 95% confidence, the null hypothesis.  The coefficients are statistically different from zero, and, overall, proportional changes in the independent variables reliably predict proportional changes in personal kerosene purchases.
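The same comparison can be reproduced in a few lines of Python with scipy, a sketch using the degrees of freedom and the computed “F” reported above:

from scipy import stats

# Critical value of F at the 5% level with (4, 123) degrees of freedom,
# compared against the computed F statistic of 4764.7.
f_computed = 4764.7
f_critical = stats.f.ppf(0.95, dfn=4, dfd=123)   # approximately 2.45

print(f_critical)
print(f_computed > f_critical)                   # True: reject the null hypothesis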

Significance Testing of Individual Variables

Performing a Test of Significance is another method of examining the results of the model.22

To test the statistical significance of a specific variable (in this instance, “P,” or the log of the price of kerosene), we develop the following hypothesis:23

Null Hypothesis or H0: βP = -1

Researcher’s Hypothesis or H1: βP ≠ -1

We then perform the “Test of Significance:”24

t = (β̂P – βP) / se(β̂P) ~ “t,” with (n – k) df

n = # of observations

k = # of variables in the model

t = (-1.1834 – (-1)) / 0.013825 = -0.1834 / 0.013825 = -13.2658

The computed “t” value equals -13.2658, and must be compared with the critical value of “t” found in the “t” table.  This is a two-sided, or two-tailed, test.  The critical values for “t” at 123 “df” are -1.9840 and 1.9840.  The computed “t” of -13.2658 lies in the rejection region.  These results indicate that the coefficient for the log of the price of kerosene is statistically different from negative one at the 5% level of significance.  The economic interpretation is that a proportional change in the price of kerosene reliably predicts a proportional change in the gallons of kerosene purchased (personal kerosene consumption).  In other words, price reliably predicts consumption.
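A short sketch of the same two-tailed test in Python, using the coefficient and standard error reported above:

from scipy import stats

# Two-tailed test of H0: beta_P = -1 using the reported coefficient and
# standard error; degrees of freedom are n - k = 128 - 5 = 123.
beta_hat, beta_null, se = -1.1834, -1.0, 0.013825
t_stat = (beta_hat - beta_null) / se          # about -13.27
t_crit = stats.t.ppf(0.975, df=123)           # about 1.98

print(t_stat)
print(abs(t_stat) > t_crit)                   # True: reject the null hypothesis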

Testing for Errors

Based on the values of statistics such as R-square and Adjusted R-square and on the results of the “F” Test and “Test of Significance,” the researcher concludes that the model provides a good estimation of the relationships between kerosene consumption and the independent variables.  There are, however, several problems that can occur with econometric modeling.  Two of the most common are autocorrelation and heteroscedasticity.25

In time-series models such as this one, successive data values are often correlated with one another.  This condition is known as autocorrelation, or serial correlation, and leads to overestimation of a model’s “goodness of fit” because the effective degrees of freedom are reduced.26  Essentially, the errors or trends that occur in one time period carry over to, and affect, the errors in another time period.27

[Figure: graphical example of serial correlation, or a data trend]

Heteroscedasticity is another common problem; it occurs when the variance of the error term is not the same for all observations.28  The presence of heteroscedasticity biases the variables’ standard errors.29

[Figure: graphical example of heteroscedasticity]

The Durbin-Watson (DW) statistic and a series of diagnostic tests were used to detect the presence of autocorrelation and heteroscedasticity.  The Durbin-Watson (DW) statistic measures autocorrelation in the error terms of a regression.30  In this model, the DW statistic is .79436.  Since the value is substantially < 1, there is evidence of positive serial correlation.  Small values indicate that successive error terms are, on average, close in value to one another, or positively correlated.  To determine positive autocorrelation, the test statistic “d” is compared to lower and upper critical values (dL and dU) based on a standard significance level of .05, the number of observations in the model (128), and the number of predictors in the regression equation (4).  With more than 100 observations and 4 explanatory variables, the critical values for the DW test are 1.59 (lower limit) and 1.76 (upper limit).  Since .79436 < 1.59, positive autocorrelation exists in this model.
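As a sketch, the statistic can be computed from the residuals of the fitted regression (the “model” object from the hypothetical estimation sketch earlier) with statsmodels:

from statsmodels.stats.stattools import durbin_watson

# Durbin-Watson statistic from the OLS residuals; values well below 2
# point toward positive serial correlation.
dw = durbin_watson(model.resid)
print(dw)   # this paper reports .79436 for its data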

The first diagnostic test performed to discover weakness in the model was the Lagrange Multiplier (LM) test of residual serial correlation (the Breusch-Godfrey test).31  The consequences of ignoring or missing autocorrelation (serial correlation) are many.  They include parameter estimates that are unbiased but inefficient, standard errors that are biased downward under positive serial correlation, an increased probability of a Type I error, and an inflated R-square.32  The following hypotheses were constructed:

H0: No serial correlation

H1: There is serial correlation

This test’s computed Chi-square = 47.3546, with 4 “df,” and a p-value of .000.  This falls in the rejection region.  At the .05 level of significance, the results are statistically significant.  Since the probability is < .05, there is autocorrelation.  The “F” version confirms the results of the LM version.  At “df” of (4, 119), the computed “F” is 17.469, with a p-value of .000.  At the .05 level of significance, the critical value of “F” lies between 2.42 and 2.46.  Since 17.469 > 2.46 > 2.42, the results lie in the rejection region.  And .000 < .05; thus, the results are statistically significant.
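A sketch of the same LM test with statsmodels, again assuming the fitted “model” object from the earlier estimation sketch; four lags correspond to one year of quarterly data:

from statsmodels.stats.diagnostic import acorr_breusch_godfrey

# Breusch-Godfrey LM test for residual serial correlation up to 4 lags.
lm_stat, lm_pvalue, f_stat, f_pvalue = acorr_breusch_godfrey(model, nlags=4)
print(lm_stat, lm_pvalue)   # chi-square version (47.3546, p = .000 reported above)
print(f_stat, f_pvalue)     # F version (17.469, p = .000 reported above)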

Another test used was Ramsey’s RESET test based on the square of the fitted values.  This test is used to determine if a model is using the correct functional form.33  The following hypotheses were created to test for functional form:

H0: βP, Y, MG = 0

H1: βP, Y, MG ≠ 0

If the null hypothesis that all regression coefficients of the non-linear terms are zero is rejected, then the model suffers from mis-specification.34  This test’s computed Chi-square = 2.4688, with df = 1 and a p-value of .1116.  Assuming a level of significance of .05, this value falls in the acceptance region because it lies between .455 (alpha = .5) and 2.706 (alpha = .10).  Additionally, the p-value (.1116) is higher than .05, so the results are statistically insignificant.  This means we do not reject the null hypothesis, and the model is using the correct functional form.  The model should remain logarithmic.  The computed “F” statistic = 2.3994, with df (1, 122) and a p-value of .124.  Assuming a level of significance of .05, the critical value lies between 3.89 and 3.94.  The high p-value of .124 indicates the results are statistically insignificant.  Thus, we do not reject the null hypothesis.  This confirms the results of the Chi-square distribution above.
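The idea behind the RESET test can be sketched by hand: re-estimate the model with the square of the fitted values added as an extra regressor and test whether its coefficient differs from zero.  This assumes the hypothetical “model,” “X,” and “y” objects from the earlier estimation sketch, and is an illustration rather than the exact procedure the original software used.

import statsmodels.api as sm

# RESET-style check: add the squared fitted values to the original
# regressors and test the added term's coefficient against zero.
X_aug = X.copy()
X_aug["fitted_sq"] = model.fittedvalues ** 2
aux = sm.OLS(y, X_aug).fit()

print(aux.tvalues["fitted_sq"] ** 2)   # comparable to the F(1, 122) version above
print(aux.pvalues["fitted_sq"])        # an insignificant p-value supports keeping the log form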

A test of skewness and kurtosis of residuals was used to analyze the model’s distribution.

The hypotheses for testing the normality of the distribution are as follows:35

H0: distribution ~ N (the residuals are normally distributed)

H1: distribution ≁ N (the residuals are not normally distributed)

The computed chi-square = 10.522 (probability = .005), df = 2.  This falls in the rejection region.  The critical value is 5.991 at df = 2, alpha = .05.  This suggests that the regression’s residuals are not normally distributed.  In other words, the distribution is not symmetric (the data are not evenly distributed around the mean); it may have a positive or negative skew, or a higher or lower peak than a normal distribution.
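One common form of such a skewness/kurtosis test is the Jarque-Bera statistic, sketched here from the fitted model’s residuals (the exact test used in the original output may differ):

from statsmodels.stats.stattools import jarque_bera

# Jarque-Bera test of residual skewness and kurtosis; under the null of
# normality the statistic follows a chi-square distribution with 2 df.
jb_stat, jb_pvalue, skew, kurtosis = jarque_bera(model.resid)
print(jb_stat, jb_pvalue)   # this paper reports chi-square = 10.522, p = .005
print(skew, kurtosis)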

The final test for model deficiencies is based on the regression of the squared residuals on the squared fitted values.  When using some statistical techniques, such as ordinary least squares (OLS), a number of assumptions are typically made.  One of these is that the error term has a constant variance.  This will be true if the observations of the error term are assumed to be drawn from identical distributions.  Heteroscedasticity is a violation of this assumption.  For example, the error term could vary or increase with each observation, something that is often the case with cross-sectional or time-series measurements.  This test’s computed Chi-square is 3.9152, with df = 1 and a p-value of .0048.  At a .05 level of significance, the critical value is 3.84.  Thus, the results lie in the rejection region.  This suggests heteroscedasticity.  The computed “F” statistic is 3.9756, with df (1, 126) and a p-value of .0048.  At a .05 level of significance, the critical value lies between 3.89 and 3.94.  Thus, the results lie in the rejection region.  This also suggests heteroscedasticity.
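A sketch of this auxiliary regression, assuming the fitted “model” object from the earlier estimation sketch:

import statsmodels.api as sm

# Regress the squared residuals on the squared fitted values; a
# significant slope means the error variance moves with the fitted
# level, i.e. heteroscedasticity.
resid_sq = model.resid ** 2
Z = sm.add_constant(model.fittedvalues ** 2)
aux_het = sm.OLS(resid_sq, Z).fit()

print(aux_het.fvalue, aux_het.f_pvalue)   # compare with the F = 3.9756, df (1, 126) reported above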

Based on the results of the Durbin-Watson and the diagnostic tests, we conclude that the model is inefficient.  The tests provided evidence of both heteroscedasticity and autocorrelation.  Despite the values of R-square, Adjusted R-square and the “F” statistic, there are problems with the variables used, and the model does not best explain changes in the personal consumption of kerosene.

How should these results be interpreted?  Positive autocorrelation indicates that the errors are correlated across time periods and that the reported accuracy or “goodness of fit” of the model is overstated.  In other words, R-squared is higher than it should be.  In addition, the explanatory variables may overlap with one another: for example, the effects of the log of miles per gallon and of increases in the population over age sixteen may overlap, since sixteen is the “driving age” in the United States.  As more people reach the age of sixteen, more people will drive, directly increasing the miles driven and fuel consumed.  It is suggested that one or more of the variables be removed to improve the model.

There are several microeconomic implications of the model.  One is that individuals and households make purchase decisions based on income levels.  The more a person earns, the greater his or her ability to allocate limited resources.  Thus, a person with greater income can purchase more kerosene.  Another implication is that price affects personal consumption.  If the cost of a good increases, the demand for that good decreases.  If the cost of a good decreases, the demand for that good increases.  Thus, if kerosene prices increase, kerosene consumption decreases.  A final implication is that as overall demand for a product increases, the supply decreases, and the price of that good increases.

References

1Reusswig, F., Lotze-Campen, H. & Gerlinger, K. Changing Global Lifestyle and Consumption Patterns: The Case of Energy and Food. Potsdam Institute for Climate Impact Research (PIK), Global Change & Social Systems Department, 2003.

2The Oxford American College Dictionary. Edited by C.A. Lindberg. New York, Oxford University Press, Inc., 2002.

3U.S. Department of Commerce. NIST/SEMATECH e-Handbook of Statistical Methods. National Institute of Standards and Technology, 2000, retrieved 18 May 2008, <http://www.itl.nist.gov/div898/handbook/>.

4Functional Form, retrieved 18 May 2008, <http://cmapskm.ihmc.us/servlet/SBRead ResourceServlet?rid=1052458916298_870839951_7777>.

5Ibid.

6Ibid.

7 Gujarati, D. Basic Econometrics, 4th ed. New York, McGraw-Hill, 2003.

8 Functional Form, retrieved 18 May 2008, <http://cmapskm.ihmc.us/servlet/SBRead ResourceServlet?rid=1052458916298_870839951_7777>.

9 Gujarati, D. Basic Econometrics, 4th ed. New York, McGraw-Hill, 2003.

10Ibid.

11Ibid.

12Ibid.

13Ibid.

14Ibid.

15Ibid.

16 U.S. Department of Commerce. NIST/SEMATECH e-Handbook of Statistical Methods. National Institute of Standards and Technology, 2000, retrieved 18 May 2008, <http://www.itl.nist.gov/div898/handbook/>.

17 Gujarati, D. Basic Econometrics, 4th ed. New York, McGraw-Hill, 2003.

18Ibid.

19Ibid.

20Ibid.

21Ibid.

22Ibid.

23Ibid.

24Ibid.

25Ibid.

26Ibid.

27Ibid.

28Ibid.

29Ibid.

30Ibid.

31Ibid.

32Ibid.

33Ibid.

34Ibid.

35Ibid.