Statistical Analysis

Correlation Coefficient

The correlation coefficient is a measure of the degree to which variations in one variable track variations in another. Given a series of paired observations for, say, income and consumer spending in different years, one can see how far movements in income correspond with movements in consumer spending. The coefficient is usually denoted r; it equals 1 if there is a perfect positive correspondence in the variations, 0 if there is no correlation, and -1 if there is a perfect inverse correspondence (higher values of one variable correspond exactly with lower values of the other).

In business, correlation analysis is very widespread, both explicitly and implicitly. However, the temptation to infer causality from correlation should be resisted: simply because income and consumer spending correlate does not mean that one causes the other. To make that inference there must be causal analysis and an underlying theory.

Regression

Regression is a statistical relationship between a dependent variable and one or more independent variables. The standard technique for regression analysis fits a straight-line trend through a scatter of points (as shown in the figure), with the line placed so as to minimize the sum of the squared distances between it and the individual points. In the formula for a simple regression (i.e. one with a single independent variable), Y = a + bX + e, Y is the dependent variable (shown on the vertical axis); X is the independent variable (shown on the horizontal axis); a is the intercept or constant term (the value of Y when X equals zero); b is the regression coefficient (the slope of the relationship, indicating the change in Y for each unit change in X); and e is the error term, reflecting the scatter of the points around the line. The closer the line fits all of the points, the stronger the associated correlation.
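The least-squares fit described above has a closed-form solution for the simple (one-variable) case, which can be sketched in a few lines. The sample data are hypothetical, chosen to lie exactly on a line so the recovered intercept and slope are easy to verify.

```python
def fit_line(x, y):
    """Ordinary least squares for a simple regression Y = a + bX.
    Minimizes the sum of squared vertical distances to the line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Slope b: covariance of X and Y divided by the variance of X.
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    # Intercept a: the fitted line always passes through the mean point.
    a = my - b * mx
    return a, b

# Hypothetical points lying exactly on Y = 1 + 2X.
a, b = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
print(a, b)   # 1.0 2.0
```

With noisy data the same function returns the line of best fit, and the residuals correspond to the error term e in the formula above.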

Time-Series Analysis

The application of statistical and econometric methods to time-series data. Time-series analysis breaks data into components and projects them into the future (see figure). The four commonly recognized components are the trend, seasonal, cyclical, and irregular variations:

The trend component (T) is the general upward or downward movement of the average over time. These movements may require many years of data to determine or describe. The basic forces underlying the trend include technological advances, productivity changes, inflation, and population change.

The seasonal component (S) is a recurring fluctuation of data points above or below the trend value that repeats with a usual frequency of one year, for example, Christmas sales.

Cyclical components (C) are recurrent upward and downward movements that repeat with a frequency longer than a year. Because this movement is attributed to business cycles (such as recession, inflation, unemployment, and prosperity), the periodicity (recurrence rate) of such cycles need not be constant.

The irregular (or random) component is a series of short, erratic movements that follow no discernible pattern. It is caused by unpredictable or nonrecurring events such as floods, wars, strikes, elections, environmental ...
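The components above can be illustrated with a small additive-model sketch. The monthly series below is synthetic (a hypothetical trend, a yearly seasonal wave, and random noise standing in for the irregular component); a centred 12-month moving average is one common way to smooth out S and most of I, leaving an estimate of the trend T.

```python
import math
import random

random.seed(0)  # make the irregular component reproducible

# Hypothetical monthly series under an additive model: Y_t = T_t + S_t + I_t
months = range(48)
trend = [100 + 0.5 * t for t in months]                           # T: steady growth
seasonal = [10 * math.sin(2 * math.pi * t / 12) for t in months]  # S: yearly cycle
irregular = [random.gauss(0, 1) for _ in months]                  # I: random noise
series = [t_ + s_ + i_ for t_, s_, i_ in zip(trend, seasonal, irregular)]

def moving_average(y, window=12):
    """Centred moving average; a 12-month window averages the seasonal
    component to zero and damps the irregular component."""
    half = window // 2
    return [sum(y[t - half:t + half]) / window
            for t in range(half, len(y) - half)]

trend_estimate = moving_average(series)  # tracks the underlying trend T
```

Full decomposition methods (and projections of each component into the future) build on this idea; dedicated tools such as statsmodels' seasonal decomposition automate it.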