From the course: Python: Working with Predictive Analytics (2019)


Random forest regression


- [Instructor] This is the last algorithm of the modeling section: the random forest regressor. We are moving from one deep tree to a forest of relatively shallow trees. Here comes the wisdom of the crowd: the collective opinion of many trees as opposed to a single tree. A random forest consists of multiple decision trees. It's based on ensemble learning, which means multiple learning methods work together as a team. In other words, there's more than one decision tree in the model, so all these individual trees get to cast their own vote. The main difference between regression and classification trees is that in regression trees we take the mean, while in classification trees we take the mode, the most frequently occurring value, when making a prediction. In other words, majority voting. This is one of the strongest algorithms of all, so what makes it strong? I'd like to introduce you to the term bagging: subdividing the data into smaller components, as if your…
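To make the idea concrete, here is a minimal sketch of a random forest regressor in scikit-learn. The synthetic data and parameter values (such as n_estimators=100) are illustrative assumptions, not taken from the course; the key point is that each tree is trained on a bootstrap sample (bagging) and the forest averages the trees' predictions.

```python
# A minimal sketch of random forest regression with scikit-learn.
# The dataset and hyperparameters here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(42)
X = rng.uniform(0, 10, size=(200, 1))              # one illustrative feature
y = np.sin(X).ravel() + rng.normal(0, 0.2, 200)    # noisy target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# Each of the 100 trees is fit on a bootstrap sample of the training data;
# the forest's prediction is the mean of the individual trees' predictions.
model = RandomForestRegressor(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

print("R^2 on the test set:", model.score(X_test, y_test))
```

In a classification forest the aggregation step would take the mode (majority vote) of the trees instead of the mean.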
