The relationship between two variables is often not linear, but almost any function that can be written in closed form can be incorporated into a nonlinear regression model. Learn here how MaxStat can help you model data with nonlinear functions.
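MaxStat does this through its menus; as a language-neutral illustration of the same idea, here is a sketch in Python with SciPy (not MaxStat's own interface), fitting a hypothetical exponential-decay model to invented noisy data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical closed-form model: y = a * exp(-b * x)
def model(x, a, b):
    return a * np.exp(-b * x)

# Simulated data around the true parameters a=2.5, b=1.3
x = np.linspace(0, 4, 50)
rng = np.random.default_rng(0)
y = model(x, 2.5, 1.3) + rng.normal(0, 0.05, x.size)

# Nonlinear least-squares fit; p0 gives starting guesses for a and b
popt, pcov = curve_fit(model, x, y, p0=(1.0, 1.0))
a_hat, b_hat = popt
```

Because the fit is iterative, reasonable starting values (`p0`) matter for convergence, just as a sensible initial guess does in any nonlinear-regression tool.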
Part 15 (Regression): Logistic Regression
Logistic regression is used to model dichotomous categorical outcomes (e.g., dead vs. alive, present vs. absent, or yes vs. no) as a function of one or more predictors. Read more in our latest lesson.
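To show what is being estimated under the hood, here is a minimal sketch in Python with SciPy (an illustration, not MaxStat's method): it fits a logistic model to hypothetical 0/1 data by maximizing the log-likelihood directly.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data: binary outcome y driven by one predictor x
rng = np.random.default_rng(1)
x = rng.normal(0, 1, 200)
true_p = 1 / (1 + np.exp(-(0.5 + 2.0 * x)))  # true intercept 0.5, slope 2.0
y = rng.binomial(1, true_p)

def neg_log_lik(beta):
    # beta[0] = intercept, beta[1] = slope of the logistic model
    z = beta[0] + beta[1] * x
    p = 1 / (1 + np.exp(-z))
    eps = 1e-12  # guard against log(0)
    return -np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

fit = minimize(neg_log_lik, x0=[0.0, 0.0])
intercept, slope = fit.x
```

The fitted slope is on the log-odds scale: each unit increase in x multiplies the odds of the outcome by exp(slope).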
Part 14 (Regression): Multiple linear regression
A multiple linear regression analysis is carried out to predict the values of a dependent variable, Y, given a set of multiple explanatory variables x. Click here to go to the lesson.
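The coefficients come from an ordinary least-squares fit of a design matrix. A short NumPy sketch with invented data (purely illustrative, not MaxStat output):

```python
import numpy as np

# Hypothetical data: y depends on two explanatory variables x1 and x2
rng = np.random.default_rng(2)
n = 100
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 3.0 * x2 + rng.normal(0, 0.1, n)

# Design matrix: a column of ones for the intercept, then the predictors
X = np.column_stack([np.ones(n), x1, x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2 = coef  # estimated intercept and slopes
```

With little noise the estimates land close to the true values (1, 2, -3), which is a handy sanity check before fitting real data.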
Part 13 (Regression): Simple linear regression
We often want to predict, or explain, one variable in terms of others. For example, how does the risk of heart disease vary with blood pressure? Or how does physical exercise decrease the level of cholesterol? Regression modeling can help with this kind of problem. In the simplest case, we determine whether a linear relationship exists between an independent variable x and a dependent variable y. Learn more in Part 13 of our course.
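The exercise-and-cholesterol question above can be sketched as a simple linear regression in Python with SciPy (hypothetical numbers, invented for illustration):

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical data: cholesterol level vs. weekly hours of exercise
exercise = np.array([0, 1, 2, 3, 4, 5, 6, 7], dtype=float)
cholesterol = np.array([240, 232, 225, 219, 214, 208, 203, 199], dtype=float)

res = linregress(exercise, cholesterol)
# res.slope < 0: cholesterol decreases as exercise increases
# res.rvalue: correlation coefficient; res.pvalue: test of zero slope
```

A negative slope with a small p-value is evidence of a linear decrease, which is exactly the question the lesson addresses.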
Part 12 (Hypothesis testing): Contingency tables and Chi2 test
The chi-squared test is a statistical test applied to categorical data to evaluate how likely it is that an observed difference between the categories arose by chance. A contingency table shows how many subjects fall into each category. Learn in the new lesson how to construct a contingency table and perform a chi-squared test on the data.
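As an illustration of the same workflow outside MaxStat, here is a SciPy sketch with a hypothetical 2x2 contingency table (treatment vs. outcome, counts invented):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table of subject counts:
#                  improved  not improved
table = np.array([[30, 10],    # treatment group
                  [18, 22]])   # control group

# Returns the test statistic, p-value, degrees of freedom,
# and the expected counts under independence
chi2, p, dof, expected = chi2_contingency(table)
```

A p-value below the chosen significance level (commonly 0.05) suggests the outcome is not independent of the group.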
Part 11 (Hypothesis testing): Two-way ANOVA
Let us continue our lessons on ANOVA, this time with two-way ANOVA, which examines the relationship between a quantitative dependent variable and two qualitative independent variables.
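To make the sums-of-squares decomposition concrete, here is a hand-rolled sketch for a balanced two-factor design in NumPy/SciPy (hypothetical plant-growth data; a teaching sketch, not MaxStat's implementation):

```python
import numpy as np
from scipy.stats import f

# Hypothetical balanced 2x2 design: factor A (fertilizer) x factor B (light),
# r = 4 replicate growth measurements per cell
cells = {
    (0, 0): [10.1, 9.8, 10.3, 9.9],
    (0, 1): [12.0, 11.7, 12.2, 11.9],
    (1, 0): [11.0, 10.8, 11.3, 10.9],
    (1, 1): [14.8, 15.1, 14.7, 15.0],
}
a_levels, b_levels, r = 2, 2, 4
y = np.array([cells[(i, j)] for i in range(a_levels) for j in range(b_levels)])
y = y.reshape(a_levels, b_levels, r)

grand = y.mean()
# Sums of squares for main effects, interaction, and error
ss_a = b_levels * r * ((y.mean(axis=(1, 2)) - grand) ** 2).sum()
ss_b = a_levels * r * ((y.mean(axis=(0, 2)) - grand) ** 2).sum()
cell_means = y.mean(axis=2)
ss_ab = r * ((cell_means - grand) ** 2).sum() - ss_a - ss_b
ss_err = ((y - cell_means[:, :, None]) ** 2).sum()

df_a, df_err = a_levels - 1, a_levels * b_levels * (r - 1)
f_a = (ss_a / df_a) / (ss_err / df_err)   # F statistic for factor A
p_a = f.sf(f_a, df_a, df_err)             # its p-value
```

The same pattern gives F statistics for factor B and the interaction; a small p-value for the interaction means the factors do not act independently.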
Part 10 (Hypothesis testing): One-way ANOVA
One-way ANalysis Of VAriance (ANOVA) is used to compare several means. This method is often used in scientific or medical experiments when more than two treatments, processes, materials or products are being compared. So in ANOVA we test the null hypothesis that all our data groups come from populations with the same mean. Read the new lesson.
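That null hypothesis can be tested in one line with SciPy; the yields below are hypothetical, invented to illustrate three treatments:

```python
from scipy.stats import f_oneway

# Hypothetical yields under three treatments
t1 = [20.1, 19.8, 21.0, 20.5]
t2 = [22.3, 23.1, 22.8, 23.0]
t3 = [19.5, 19.9, 20.2, 19.7]

# One-way ANOVA: are all three population means equal?
stat, p = f_oneway(t1, t2, t3)
```

A small p-value rejects the null hypothesis of equal means, but it does not say *which* groups differ; that is what post-hoc comparisons are for.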
Part 9 (Hypothesis testing): t-tests and nonparametric tests
We are moving on in our course to statistical hypothesis testing, including t-tests and ANOVA. They are commonly used in statistics, but as a non-statistician it can be difficult to select the right one. Here we describe the t-tests and their non-parametric equivalents, so you can learn which one to use for your data analysis.
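As a quick side-by-side illustration in SciPy (hypothetical measurements), the parametric two-sample t-test and its non-parametric counterpart, the Mann-Whitney U test, are called almost identically:

```python
from scipy.stats import ttest_ind, mannwhitneyu

# Hypothetical measurements from two independent groups
group_a = [5.1, 4.9, 5.3, 5.0, 5.2]
group_b = [6.0, 6.2, 5.9, 6.1, 6.3]

# Parametric: assumes roughly normal data
t_stat, t_p = ttest_ind(group_a, group_b)
# Non-parametric: compares ranks, no normality assumption
u_stat, u_p = mannwhitneyu(group_a, group_b, alternative="two-sided")
```

When the data pass a normality check, the t-test is the more powerful choice; otherwise the rank-based test is the safer one.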
Part 8 (Basic Statistics): Outlier detection
It is not appropriate to remove data from a group simply because they seem to someone to be unreasonably extreme. However, if proper testing and scientific reasoning show the data to be incorrect, they should be eliminated from the group. In lesson 8, we introduce the Grubbs test to detect outliers.
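The Grubbs test compares the most extreme point's standardized deviation against a critical value derived from the t distribution. A sketch in Python with SciPy (hypothetical data; the two-sided critical-value formula is the standard one, not a MaxStat-specific detail):

```python
import numpy as np
from scipy.stats import t

def grubbs_statistic(x):
    # G = largest absolute deviation from the mean, in standard deviations
    x = np.asarray(x, dtype=float)
    return np.max(np.abs(x - x.mean())) / x.std(ddof=1)

def grubbs_critical(n, alpha=0.05):
    # Two-sided critical value for the Grubbs test at level alpha
    tcrit = t.ppf(1 - alpha / (2 * n), n - 2)
    return (n - 1) / np.sqrt(n) * np.sqrt(tcrit**2 / (n - 2 + tcrit**2))

# Hypothetical measurements; 15.6 looks suspiciously extreme
data = [9.8, 10.1, 9.9, 10.2, 10.0, 9.7, 15.6]
g = grubbs_statistic(data)
is_outlier = g > grubbs_critical(len(data))
```

Even when the test flags a point, it should only be removed if a scientific reason for the error can be given, echoing the caution above.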
Part 7 (Basic Statistics): Normality
In statistics, we use normality tests to determine whether a data set follows a normal distribution, or to compute how likely an underlying random variable is to be normally distributed. We have already learned that the choice between parametric and non-parametric tests depends on whether our data follow a normal distribution. In this lesson you learn which statistical tests you can apply to check the normality of your data.
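One widely used option is the Shapiro-Wilk test; here is a SciPy sketch on a simulated normal sample (an illustration of the general idea, not a list of the tests MaxStat offers):

```python
import numpy as np
from scipy.stats import shapiro

# Simulated data drawn from a normal distribution
rng = np.random.default_rng(3)
sample = rng.normal(loc=10, scale=2, size=50)

# Shapiro-Wilk: null hypothesis is that the sample is normally distributed
stat, p = shapiro(sample)
normal_plausible = p > 0.05  # large p: no evidence against normality
```

Note the logic is inverted compared with most tests: a *large* p-value is the reassuring outcome, since the null hypothesis here is normality itself.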