
**What Are Decision Trees? Classifying Regression Trees vs Classification Trees**

Tree-based algorithms are among the most widely used supervised learning methods, and they are applied extensively in predictive modeling in data science. These algorithms produce predictive models with high accuracy, stability, and ease of interpretation, and they can capture non-linear relationships quite well.

The most commonly used tree-based algorithms in data science include Decision Trees, Random Forest, and Gradient Boosting. If you are a budding Data Scientist, knowledge of these algorithms is crucial. You can master the popular tree-based algorithms with our advanced **Data Science Training In Hyderabad** program.

**What Is A Decision Tree?**

A Decision Tree is a supervised learning algorithm that is widely used for classification problems. In the Decision Tree technique, the sample set is split into two or more homogeneous sets based on the most significant splitter/differentiator among the input variables.
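As a minimal sketch of this idea, the snippet below fits a decision tree classifier with scikit-learn (an assumed library choice; the iris dataset stands in for the "sample set"). The tree repeatedly picks the most significant feature threshold to split the data into more homogeneous subsets.

```python
# Minimal decision tree classification sketch using scikit-learn (assumed).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Load a small labeled dataset and hold out a test split.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Each internal node splits on the feature threshold that best separates
# the classes, yielding progressively more homogeneous subsets.
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

The `max_depth` parameter here is one example of a user-defined stopping criterion that limits how far the splitting proceeds.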

**Classifying Regression Trees vs Classification Trees**

- Regression trees are used when the dependent variable is continuous, whereas classification trees are used when the dependent variable is categorical.
- In a regression tree, the value at a terminal node is the mean response of the training observations falling in that region, so an unseen observation is predicted with that mean value. In a classification tree, an unseen observation is predicted with the mode (most frequent class) of the region it falls into.
- Both regression and classification trees divide the predictor space into distinct, non-overlapping regions.
- Both techniques rely on a greedy, recursive binary splitting approach. The approach is called greedy because the algorithm considers only the current split, rather than looking ahead to future splits that might lead to a better tree.
- In both cases, the splitting process continues until a user-defined stopping criterion is reached.
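The mean-versus-mode distinction above can be sketched with a tiny toy dataset (assumed values, scikit-learn assumed as the library). A depth-1 tree makes exactly one greedy binary split of the predictor space; the regressor then predicts the mean of the training targets in each region, while the classifier predicts the mode.

```python
# Contrast regression vs classification trees on the same predictor.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

# Toy data: two well-separated clusters of x values.
X = np.array([[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]])
y_cont = np.array([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])  # continuous target
y_cat = np.array([0, 0, 0, 1, 1, 1])                  # categorical target

# Depth-1 trees: a single greedy binary split (here, between 3 and 10).
reg = DecisionTreeRegressor(max_depth=1).fit(X, y_cont)
clf = DecisionTreeClassifier(max_depth=1).fit(X, y_cat)

# Regression tree: prediction is the mean of {1, 2, 3} in the left region.
print(reg.predict([[2.5]]))  # -> [2.0]
# Classification tree: prediction is the mode of {0, 0, 0} in that region.
print(clf.predict([[2.5]]))  # -> [0]
```

Because both trees split the same predictor space greedily, they choose the same cut point; only the leaf prediction rule (mean versus mode) differs.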

Learn more in depth about Decision Trees through our Data Science training program.