In a simple regression analysis there is a response or dependent variable (y), which may be the abundance or the presence-absence of a single species, and an explanatory or independent variable (x). The purpose is to obtain a simple function of the independent variable that describes the variation of the dependent variable as closely as possible. Because the observed values of the dependent variable generally differ from those predicted by the function, there is an error. The most effective function is the one that describes the dependent variable with the least possible error or, in other words, with the smallest difference between observed and predicted values.
The differences between observed and predicted values (the errors of the function) are called residuals. The parameters of the function are estimated using a least-squares fit. For this strategy to be valid, however, the residuals (errors) must be normally distributed and must vary similarly over the entire range of values of the dependent variable. These assumptions can be tested by examining the distribution of the residuals and their relationship with the dependent variable.
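The least-squares fit and residual check described above can be sketched as follows; the data here are hypothetical example values, not taken from this document:

```python
import numpy as np

# Hypothetical data: species counts (y) along an environmental gradient (x)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])

# Least-squares estimates of slope (b) and intercept (c) for y = c + b*x
b, c = np.polyfit(x, y, deg=1)

# Residuals: observed minus predicted values.
# For a least-squares line fitted with an intercept, the residuals sum to zero;
# plotting them against x (or against y) is how the assumptions are checked.
predicted = c + b * x
residuals = y - predicted
```

In practice one would plot `residuals` against `x` to look for trends or changing spread, which would signal a violation of the assumptions.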
When the dependent variable is quantitative (e.g., the number of species) and the relationship between the two variables is linear, the function is a straight line of the form y = c + bx, where c is the intercept, the value at which the regression line cuts the axis of the dependent variable (a measure of the number of species present when the environmental variable takes its minimum value), and b is the slope or regression coefficient (the rate of increase in the number of species per unit of the environmental variable considered).
The simplest polynomial function is the quadratic (y = c + b1x + b2x^2), which describes a parabola, but a cubic or higher-order function can be used to achieve an even closer, nearly perfect fit to the data. When the dependent variable consists of qualitative data (presence-absence of a species), logistic regression analysis is recommended (y = exp(c + bx) / [1 + exp(c + bx)]).
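The three model forms just described (linear, quadratic, and logistic) can be written as simple functions; the parameter names follow the notation above:

```python
import numpy as np

def linear(x, c, b):
    # Straight line: y = c + b*x
    return c + b * x

def quadratic(x, c, b1, b2):
    # Parabola: y = c + b1*x + b2*x^2
    return c + b1 * x + b2 * x ** 2

def logistic(x, c, b):
    # Logistic curve, bounded between 0 and 1, suitable for
    # presence-absence (0/1) responses
    return np.exp(c + b * x) / (1 + np.exp(c + b * x))
```

Note that the logistic function never leaves the interval (0, 1), which is why it suits presence-absence data: its output can be read as the probability that the species is present.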
Regression Equation and Prediction for X and Y
X     Y
-5    -10
-3    -8
4     9
1     1
-1    -2
-2    -6
0     -1
2     3
3     6
-4    -82
For the data set given above we have applied simple linear regression; the predictions for the data are given in the table below. The regression equation for this data set is: Y = -6.382 + 5.236X
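The reported coefficients can be reproduced directly from the data set above with a least-squares fit:

```python
import numpy as np

# Data set from the table above
x = np.array([-5, -3, 4, 1, -1, -2, 0, 2, 3, -4], dtype=float)
y = np.array([-10, -8, 9, 1, -2, -6, -1, 3, 6, -82], dtype=float)

# Least-squares slope and intercept; polyfit returns them in that order
b, c = np.polyfit(x, y, deg=1)
# b rounds to 5.236 and c to -6.382, matching the equation in the text
```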
Coefficients (a)

Model 1        Unstandardized B   Std. Error   Standardized Beta   t       Sig.
(Constant)     -6.382             7.163                            -.891   .399
X              5.236              2.457        .602                2.131   .066

a. Dependent Variable: Y
The regression equation shows that the intercept (-6.382) contributes negatively to the predicted values, while the positive slope (5.236) indicates that Y increases with X. However, the Sig. values for both the intercept (.399) and the slope (.066) exceed the conventional .05 level, so the results are not statistically significant: the apparent relationship may be due to chance, and other factors may also be influencing the equation.
X     Predicted Y
-2    -16.854
4     14.562
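The predictions in the table above follow from substituting each X value into the fitted equation:

```python
def predict(x):
    # Fitted equation from the regression above: Y = -6.382 + 5.236*X
    return -6.382 + 5.236 * x

# predict(-2) rounds to -16.854 and predict(4) to 14.562,
# matching the table above
```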
Regression Equation and Prediction for Final Exams Scores
The data below are the final exam scores of 10 randomly selected statistics students and the number ...