mirror of https://github.com/krahets/hello-algo.git
synced 2026-04-24 10:33:34 +08:00

Revisit the English version (#1835)

* Review the English version using Claude-4.5.
* Update mkdocs.yml
* Align the section titles.
* Bug fixes
# Array representation of binary trees

Under the linked list representation, the storage unit of a binary tree is a node `TreeNode`, and nodes are connected by pointers. The previous section introduced the basic operations of binary trees under the linked list representation.

So, can we use an array to represent a binary tree? The answer is yes.
Let's analyze a simple case first. Given a perfect binary tree, we store all nodes in an array according to the order of level-order traversal, where each node corresponds to a unique array index.

Based on the characteristics of level-order traversal, we can derive a "mapping formula" between parent node index and child node indices: **If a node's index is $i$, then its left child index is $2i + 1$ and its right child index is $2i + 2$**. The figure below shows the mapping relationships between various node indices.


## Representing any binary tree

Perfect binary trees are a special case; in the middle levels of a binary tree, there are typically many `None` values. Since the level-order traversal sequence does not include these `None` values, we cannot infer the number and distribution of `None` values based on this sequence alone. **This means multiple binary tree structures can correspond to the same level-order traversal sequence**.

As shown in the figure below, given a non-perfect binary tree, the above method of array representation fails.

To solve this problem, **we can consider explicitly writing out all `None` values in the level-order traversal sequence**. The example code is as follows:

```kotlin title=""
/* Array representation of a binary tree */
// Using null to represent empty slots
val tree = arrayOf( 1, 2, 3, 4, null, 6, 7, 8, 9, null, null, 12, null, null, 15 )
```
=== "Ruby"
```ruby title=""
### Array representation of a binary tree ###
# Using nil to represent empty slots
tree = [1, 2, 3, 4, nil, 6, 7, 8, 9, nil, nil, 12, nil, nil, 15]
```
=== "Zig"


It's worth noting that **complete binary trees are very well-suited for array representation**. Recalling the definition of a complete binary tree, `None` only appears at the bottom level and towards the right, **meaning all `None` values must appear at the end of the level-order traversal sequence**.

This means that when using an array to represent a complete binary tree, it's possible to omit storing all `None` values, which is very convenient. The figure below gives an example.
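A small sketch of why the omission works (helper names are ours): since every missing slot of a complete binary tree lies past the end of the array, an out-of-bounds index is equivalent to a `None` child.

```python
# A complete binary tree stored in level order needs no None placeholders:
# all missing slots are at the tail and can simply be omitted.
tree = [1, 2, 3, 4, 5, 6]       # node 3 (index 2) has only a left child

def left(i):  return 2 * i + 1
def right(i): return 2 * i + 2

def has(i):
    # An index past the end of the array corresponds to a missing node
    return i < len(tree)

print(has(left(2)), has(right(2)))  # True False
```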
The following code implements a binary tree based on array representation, including the following operations:

- Given a certain node, obtain its value, left (right) child node, and parent node.
- Obtain the preorder, inorder, postorder, and level-order traversal sequences.

```src
[file]{array_binary_tree}-[class]{array_binary_tree}-[func]{}
```

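A rough sketch of such a class (hypothetical names; the book's `ArrayBinaryTree` implementation referenced above is the complete version):

```python
# Condensed sketch of an array-based binary tree (assumed class name).
class ArrayBinaryTree:
    def __init__(self, arr):
        self._tree = list(arr)

    def size(self):
        return len(self._tree)

    def val(self, i):
        # Return the value at index i, or None if i is out of bounds
        return self._tree[i] if 0 <= i < self.size() else None

    def left(self, i):   return 2 * i + 1
    def right(self, i):  return 2 * i + 2
    def parent(self, i): return (i - 1) // 2

    def level_order(self):
        # Level-order traversal: scan the array, skipping empty slots
        return [v for v in self._tree if v is not None]

    def pre_order(self, i=0):
        # Preorder traversal: root -> left subtree -> right subtree
        if self.val(i) is None:
            return []
        return ([self.val(i)]
                + self.pre_order(self.left(i))
                + self.pre_order(self.right(i)))

tree = ArrayBinaryTree([1, 2, 3, 4, None, 6, 7])
print(tree.level_order())  # [1, 2, 3, 4, 6, 7]
print(tree.pre_order())    # [1, 2, 4, 3, 6, 7]
```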
The array representation of binary trees has the following advantages:

- Arrays are stored in contiguous memory space, which is cache-friendly, allowing faster access and traversal.
- It does not require storing pointers, which saves space.
- It allows random access to nodes.
However, the array representation also has some limitations:
- Array storage requires contiguous memory space, so it is not suitable for storing trees with a large amount of data.
- Adding or removing nodes requires array insertion and deletion operations, which have lower efficiency.
- When there are many `None` values in the binary tree, the proportion of node data contained in the array is low, leading to lower space utilization.
# AVL tree *

In the "Binary Search Tree" section, we mentioned that after multiple insertion and removal operations, a binary search tree may degenerate into a linked list. In this case, the time complexity of all operations degrades from $O(\log n)$ to $O(n)$.

As shown in the figure below, after two node removal operations, this binary search tree will degrade into a linked list.


In 1962, G. M. Adelson-Velsky and E. M. Landis proposed the <u>AVL tree</u> in their paper "An algorithm for the organization of information". The paper described in detail a series of operations ensuring that after continuously adding and removing nodes, the AVL tree does not degenerate, thus keeping the time complexity of various operations at the $O(\log n)$ level. In other words, in scenarios requiring frequent insertions, deletions, searches, and modifications, the AVL tree can always maintain efficient data operation performance, making it very valuable in applications.

## Common terminology in AVL trees

An AVL tree is both a binary search tree and a balanced binary tree, simultaneously satisfying all the properties of these two types of binary trees, hence it is a <u>balanced binary search tree</u>.

### Node height

Since the operations related to AVL trees require obtaining node heights, we need to add a `height` variable to the node class:

```c title=""
/* AVL tree node */
typedef struct TreeNode {
    int val;                // Node value
    int height;             // Node height
    struct TreeNode *left;  // Left child
    struct TreeNode *right; // Right child
} TreeNode;
```

=== "Ruby"
```ruby title=""
### AVL tree node class ###
class TreeNode
  attr_accessor :val    # Node value
  attr_accessor :height # Node height
  attr_accessor :left   # Left child reference
  attr_accessor :right  # Right child reference

  def initialize(val)
    @val = val
    @height = 0
  end
end
```
=== "Zig"

The "node height" refers to the distance from that node to its farthest leaf node, i.e., the number of "edges" passed.

### Node balance factor

The <u>balance factor</u> of a node is defined as the height of the node's left subtree minus the height of its right subtree, and the balance factor of a null node is defined as $0$. We also encapsulate the function to obtain the node's balance factor for convenient subsequent use:

```src
[file]{avl_tree}-[class]{avl_tree}-[func]{balance_factor}
```

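A condensed sketch of these two helpers (the referenced source file holds the book's full implementation; the names below follow the text but are our assumptions):

```python
# Minimal AVL node with a height field, plus the two helpers described above.
class TreeNode:
    def __init__(self, val):
        self.val = val
        self.height = 0   # node height
        self.left = None
        self.right = None

def height(node):
    # The height of a null node is defined as -1; a leaf node has height 0
    return node.height if node is not None else -1

def balance_factor(node):
    # Balance factor = height(left subtree) - height(right subtree);
    # the balance factor of a null node is 0
    if node is None:
        return 0
    return height(node.left) - height(node.right)
```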
## Rotations in AVL trees

The characteristic of AVL trees lies in the "rotation" operation, which can restore balance to unbalanced nodes without affecting the inorder traversal sequence of the binary tree. In other words, **rotation operations can both maintain the property of a "binary search tree" and make the tree return to a "balanced binary tree"**.

We call nodes with a balance factor absolute value $> 1$ "unbalanced nodes". Depending on the imbalance situation, rotation operations are divided into four types: right rotation, left rotation, left rotation then right rotation, and right rotation then left rotation. Below we describe these rotation operations in detail.

### Right rotation

As shown in the figure below, the value below the node is the balance factor. From bottom to top, the first unbalanced node in the binary tree is "node 3". We focus on the subtree with this unbalanced node as the root, denoting the node as `node` and its left child as `child`, and perform a "right rotation" operation. After the right rotation is completed, the subtree regains balance and still maintains the properties of a binary search tree.

=== "<1>"



It can be observed that **right rotation and left rotation operations are mirror symmetric in logic, and the two imbalance cases they solve are also symmetric**. Based on symmetry, we only need to replace all `left` in the right rotation implementation code with `right`, and all `right` with `left`, to obtain the left rotation implementation code:

```src
[file]{avl_tree}-[class]{avl_tree}-[func]{left_rotate}
```
### Left rotation then right rotation

For the unbalanced node 3 in the figure below, using either left rotation or right rotation alone cannot restore the subtree to balance. In this case, a "left rotation" needs to be performed on `child` first, followed by a "right rotation" on `node`.


### Right rotation then left rotation

As shown in the figure below, for the mirror case of the above unbalanced binary tree, a "right rotation" needs to be performed on `child` first, then a "left rotation" on `node`.


### Choice of rotation

The four imbalances shown in the figure below correspond one-to-one with the above cases, requiring right rotation, left rotation then right rotation, right rotation then left rotation, and left rotation operations respectively.



As shown in the table below, we determine which case the unbalanced node belongs to by judging the signs of the balance factor of the unbalanced node and the balance factor of its taller-side child node.

<p align="center"> Table <id> Conditions for Choosing Among the Four Rotation Cases </p>

| Balance factor of the unbalanced node | Balance factor of the child node | Rotation method to apply |
| -------------------------------------- | --------------------------------- | --------------------------------- |
| $> 1$ (left-leaning tree) | $\geq 0$ | Right rotation |
| $> 1$ (left-leaning tree) | $<0$ | Left rotation then right rotation |
| $< -1$ (right-leaning tree) | $\leq 0$ | Left rotation |
| $< -1$ (right-leaning tree) | $>0$ | Right rotation then left rotation |

For ease of use, we encapsulate the rotation operations into a function. **With this function, we can perform rotations for various imbalance situations, restoring balance to unbalanced nodes**. The code is as follows:

```src
[file]{avl_tree}-[class]{avl_tree}-[func]{rotate}
```

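Putting the table into code, here is a self-contained sketch of the rotations and the dispatch (a minimal version; helper names are assumptions based on the text):

```python
# Minimal AVL rotation sketch: node, height helpers, rotations, dispatch.
class TreeNode:
    def __init__(self, val):
        self.val, self.height = val, 0
        self.left = self.right = None

def height(n):
    return n.height if n else -1          # height of a null node is -1

def update_height(n):
    n.height = max(height(n.left), height(n.right)) + 1

def balance_factor(n):
    return height(n.left) - height(n.right) if n else 0

def right_rotate(node):
    child, grand_child = node.left, node.left.right
    child.right, node.left = node, grand_child  # rotate node down-right
    update_height(node)                   # node is now the lower one
    update_height(child)
    return child                          # child is the new subtree root

def left_rotate(node):
    # Mirror of right_rotate: every left/right swapped
    child, grand_child = node.right, node.right.left
    child.left, node.right = node, grand_child
    update_height(node)
    update_height(child)
    return child

def rotate(node):
    bf = balance_factor(node)
    if bf > 1:                            # left-leaning tree
        if balance_factor(node.left) < 0:
            node.left = left_rotate(node.left)  # left then right rotation
        return right_rotate(node)
    if bf < -1:                           # right-leaning tree
        if balance_factor(node.right) > 0:
            node.right = right_rotate(node.right)  # right then left rotation
        return left_rotate(node)
    return node                           # already balanced

# Left-leaning chain 3 <- 2 <- 1 regains balance after one right rotation
n3, n2, n1 = TreeNode(3), TreeNode(2), TreeNode(1)
n3.left, n2.left = n2, n1
update_height(n2); update_height(n3)
root = rotate(n3)
print(root.val, root.left.val, root.right.val)  # 2 1 3
```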
### Node insertion

The node insertion operation in AVL trees is similar in principle to that in binary search trees. The only difference is that after inserting a node in an AVL tree, a series of unbalanced nodes may appear on the path from that node to the root. Therefore, **we need to start from this node and perform rotation operations from bottom to top, restoring balance to all unbalanced nodes**. The code is as follows:

```src
[file]{avl_tree}-[class]{avl_tree}-[func]{insert_helper}
```

### Node removal

Similarly, on the basis of the binary search tree's node removal method, rotation operations need to be performed from bottom to top to restore balance to all unbalanced nodes. The code is as follows:

```src
[file]{avl_tree}-[class]{avl_tree}-[func]{remove_helper}
```

### Node search

The node search operation in AVL trees is consistent with that in binary search trees, and will not be elaborated here.

## Typical applications of AVL trees
- Organizing and storing large-scale data, suitable for scenarios with high-frequency searches and low-frequency insertions and deletions.
- Used to build index systems in databases.
- Red-black trees are also a common type of balanced binary search tree. Compared to AVL trees, red-black trees have more relaxed balance conditions, require fewer rotation operations for node insertion and deletion, and have higher average efficiency for node addition and deletion operations.

# Binary search tree

We encapsulate the binary search tree as a class `BinarySearchTree` and declare a member variable `root` pointing to the tree's root node.

### Searching for a node

Given a target node value `num`, we can search according to the properties of the binary search tree. As shown in the figure below, we declare a node `cur` and start from the binary tree's root node `root`, looping to compare the node value `cur.val` with `num`.

- If `cur.val < num`, it means the target node is in `cur`'s right subtree, thus execute `cur = cur.right`.
- If `cur.val > num`, it means the target node is in `cur`'s left subtree, thus execute `cur = cur.left`.
=== "<4>"


The search operation in a binary search tree works on the same principle as the binary search algorithm, both eliminating half of the cases in each round. The number of loop iterations is at most the height of the binary tree. When the binary tree is balanced, it uses $O(\log n)$ time. The example code is as follows:

```src
[file]{binary_search_tree}-[class]{binary_search_tree}-[func]{search}
```

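The search loop described above can be sketched as follows (a minimal Python version; the node class is our simplified stand-in):

```python
# Loop-based BST search following the comparison rules described above.
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def search(root, num):
    cur = root
    while cur is not None:
        if cur.val < num:      # target is in cur's right subtree
            cur = cur.right
        elif cur.val > num:    # target is in cur's left subtree
            cur = cur.left
        else:                  # found the target node
            return cur
    return None                # num is not in the tree

root = TreeNode(8, TreeNode(4, TreeNode(2), TreeNode(6)), TreeNode(12))
print(search(root, 6).val)  # 6
print(search(root, 5))      # None
```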
### Inserting a node

Given an element `num` to be inserted, in order to maintain the property of the binary search tree "left subtree < root node < right subtree," the insertion process is as shown in the figure below.

1. **Finding the insertion position**: Similar to the search operation, start from the root node and loop downward searching according to the size relationship between the current node value and `num`, until passing the leaf node (traversing to `None`) and then exit the loop.
2. **Insert the node at that position**: Initialize node `num` and place it at the `None` position.



In the code implementation, note the following two points:

- Binary search trees do not allow duplicate nodes; otherwise, it would violate its definition. Therefore, if the node to be inserted already exists in the tree, the insertion is not performed and it returns directly.
- To implement the node insertion, we need to use node `pre` to save the node from the previous loop iteration. This way, when traversing to `None`, we can obtain its parent node, thereby completing the node insertion operation.

```src
[file]{binary_search_tree}-[class]{binary_search_tree}-[func]{insert}
```

Similar to searching for a node, inserting a node uses $O(\log n)$ time.

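The two steps and the two caveats above can be sketched as follows (a simplified function-style variant; the book's class-based implementation referenced above is the authoritative version):

```python
# BST insertion: find the position with pre tracking the parent, then link.
class TreeNode:
    def __init__(self, val):
        self.val, self.left, self.right = val, None, None

def insert(root, num):
    cur, pre = root, None
    while cur is not None:
        if cur.val == num:
            return root            # duplicate node: do not insert
        pre = cur                  # save the parent before descending
        cur = cur.right if cur.val < num else cur.left
    node = TreeNode(num)
    if pre is None:
        return node                # empty tree: the new node becomes the root
    if pre.val < num:
        pre.right = node
    else:
        pre.left = node
    return root

root = insert(None, 8)
for v in (4, 12, 6):
    root = insert(root, v)
print(root.left.right.val)  # 6
```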
### Removing a node

First, find the target node in the binary tree, then remove it. Similar to node insertion, we need to ensure that after the removal operation is completed, the binary search tree's property of "left subtree $<$ root node $<$ right subtree" is still maintained. Therefore, depending on the number of child nodes the target node has, we divide it into 0, 1, and 2 three cases, and execute the corresponding node removal operations.

As shown in the figure below, when the degree of the node to be removed is $0$, it means the node is a leaf node and can be directly removed.

As shown in the figure below, when the degree of the node to be removed is $1$, replacing the node to be removed with its child node is sufficient.



When the degree of the node to be removed is $2$, we cannot directly remove it; instead, we need to use a node to replace it. To maintain the binary search tree's property of "left subtree $<$ root node $<$ right subtree," **this node can be either the smallest node in the right subtree or the largest node in the left subtree**.

Assuming we choose the smallest node in the right subtree (the next node in the inorder traversal), the removal process is as shown in the figure below.

1. Find the next node of the node to be removed in the "inorder traversal sequence," denoted as `tmp`.
2. Replace the value of the node to be removed with the value of `tmp`, and recursively remove node `tmp` in the tree.

=== "<1>"

=== "<4>"


The node removal operation also uses $O(\log n)$ time, where finding the node to be removed requires $O(\log n)$ time, and obtaining the inorder successor node requires $O(\log n)$ time. Example code is as follows:

```src
[file]{binary_search_tree}-[class]{binary_search_tree}-[func]{remove}
```
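The three removal cases can be sketched as follows (a simplified recursive variant; the `remove` implementation referenced above is the book's version):

```python
# BST removal covering degree 0, 1, and 2 of the target node.
class TreeNode:
    def __init__(self, val):
        self.val, self.left, self.right = val, None, None

def remove(root, num):
    if root is None:
        return None
    if root.val < num:
        root.right = remove(root.right, num)
    elif root.val > num:
        root.left = remove(root.left, num)
    else:
        # Degree 0 or 1: replace the node with its only child (or None)
        if root.left is None or root.right is None:
            return root.left or root.right
        # Degree 2: copy the inorder successor tmp's value into the node,
        # then recursively remove tmp from the right subtree
        tmp = root.right
        while tmp.left is not None:
            tmp = tmp.left
        root.val = tmp.val
        root.right = remove(root.right, tmp.val)
    return root

def inorder(n):
    return inorder(n.left) + [n.val] + inorder(n.right) if n else []

root = TreeNode(8)
root.left, root.right = TreeNode(4), TreeNode(12)
root.left.left, root.left.right = TreeNode(2), TreeNode(6)
root = remove(root, 4)   # degree-2 node: replaced by its successor 6
print(inorder(root))     # [2, 6, 8, 12]
```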
### Inorder traversal is ordered

As shown in the figure below, the inorder traversal of a binary tree follows the "left $\rightarrow$ root $\rightarrow$ right" traversal order, while the binary search tree satisfies the "left child node $<$ root node $<$ right child node" size relationship.

This means that when performing an inorder traversal in a binary search tree, the next smallest node is always traversed first, thus yielding an important property: **The inorder traversal sequence of a binary search tree is ascending**.

Using the property of inorder traversal being ascending, we can obtain ordered data in a binary search tree in only $O(n)$ time, without the need for additional sorting operations, which is very efficient.

![Inorder traversal sequence of a binary search tree](binary_search_tree.assets/binary_search_tree_inorder_traversal.png)

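This property is easy to verify with a small sketch (our simplified node class, not the book's):

```python
# Inorder traversal of a binary search tree yields ascending order.
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def inorder(node):
    # left -> root -> right
    if node is None:
        return []
    return inorder(node.left) + [node.val] + inorder(node.right)

#       4
#      / \
#     2   6
#    / \
#   1   3
root = TreeNode(4, TreeNode(2, TreeNode(1), TreeNode(3)), TreeNode(6))
print(inorder(root))  # [1, 2, 3, 4, 6]
```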
## Efficiency of binary search trees

Given a set of data, we consider using an array or a binary search tree for storage. Observing the table below, all operations in a binary search tree have logarithmic time complexity, providing stable and efficient performance. Arrays are more efficient than binary search trees only in scenarios with high-frequency additions and low-frequency searches and deletions.

<p align="center"> Table <id> Efficiency comparison between arrays and search trees </p>

|                | Array  | Binary search tree |
| -------------- | ------ | ------------------ |
| Insert element | $O(1)$ | $O(\log n)$ |
| Remove element | $O(n)$ | $O(\log n)$ |

In the ideal case, a binary search tree is "balanced," such that any node can be found within $\log n$ loop iterations.

However, if we continuously insert and remove nodes in a binary search tree, it may degenerate into a linked list as shown in the figure below, where the time complexity of various operations also degrades to $O(n)$.
# Binary tree

A <u>binary tree</u> is a non-linear data structure that represents the derivation relationship between "ancestors" and "descendants" and embodies the divide-and-conquer logic of "one divides into two". Similar to a linked list, the basic unit of a binary tree is a node, and each node contains a value, a reference to its left child node, and a reference to its right child node.

=== "Python"
|
||||
|
||||
@@ -189,7 +189,16 @@ A <u>binary tree</u> is a non-linear data structure that represents the hierarch
|
||||
=== "Ruby"
|
||||
|
||||
```ruby title=""
|
||||
### Binary tree node class ###
|
||||
class TreeNode
|
||||
attr_accessor :val # Node value
|
||||
attr_accessor :left # Reference to left child node
|
||||
attr_accessor :right # Reference to right child node
|
||||
|
||||
def initialize(val)
|
||||
@val = val
|
||||
end
|
||||
end
|
||||
```
|
||||
|
||||
=== "Zig"
|
||||
@@ -432,7 +441,18 @@ Similar to a linked list, the initialization of a binary tree involves first cre

=== "Ruby"

    ```ruby title="binary_tree.rb"
    # Initializing a binary tree
    # Initializing nodes
    n1 = TreeNode.new(1)
    n2 = TreeNode.new(2)
    n3 = TreeNode.new(3)
    n4 = TreeNode.new(4)
    n5 = TreeNode.new(5)
    # Linking references (pointers) between nodes
    n1.left = n2
    n1.right = n3
    n2.left = n4
    n2.right = n5
    ```

=== "Zig"
@@ -594,7 +614,13 @@ Similar to a linked list, inserting and removing nodes in a binary tree can be a

=== "Ruby"

    ```ruby title="binary_tree.rb"
    # Inserting and removing nodes
    _p = TreeNode.new(0)
    # Inserting node _p between n1 and n2
    n1.left = _p
    _p.left = n2
    # Removing node _p
    n1.left = n2
    ```

=== "Zig"
@@ -615,7 +641,7 @@ Similar to a linked list, inserting and removing nodes in a binary tree can be a

### Perfect binary tree

As shown in the figure below, in a <u>perfect binary tree</u>, all levels are completely filled with nodes. In a perfect binary tree, leaf nodes have a degree of $0$, while all other nodes have a degree of $2$. The total number of nodes can be calculated as $2^{h+1} - 1$, where $h$ is the height of the tree. This exhibits a standard exponential relationship, reflecting the common phenomenon of cell division in nature.

As shown in the figure below, a <u>perfect binary tree</u> has all levels completely filled with nodes. In a perfect binary tree, leaf nodes have a degree of $0$, while all other nodes have a degree of $2$. If the tree height is $h$, the total number of nodes is $2^{h+1} - 1$, exhibiting a standard exponential relationship that reflects the common phenomenon of cell division in nature.

!!! tip
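The node-count formula can be checked directly: level $i$ of a perfect binary tree holds $2^i$ nodes, and summing the levels $0$ to $h$ gives $2^{h+1} - 1$. A quick sketch:

```ruby
# Checking the perfect-binary-tree node-count formula:
# summing 2^level over levels 0..h equals 2^(h + 1) - 1.
(0..10).each do |h|
  total = (0..h).sum { |level| 2**level }
  raise "formula fails at h = #{h}" unless total == 2**(h + 1) - 1
end
puts "formula holds for h = 0..10"
```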
@@ -625,13 +651,13 @@ As shown in the figure below, in a <u>perfect binary tree</u>, all levels are co

### Complete binary tree

As shown in the figure below, a <u>complete binary tree</u> is a binary tree where only the bottom level is possibly not completely filled, and nodes at the bottom level must be filled continuously from left to right. Note that a perfect binary tree is also a complete binary tree.

As shown in the figure below, a <u>complete binary tree</u> only allows the bottom level to be incompletely filled, and the nodes at the bottom level must be filled continuously from left to right. Note that a perfect binary tree is also a complete binary tree.
### Full binary tree

As shown in the figure below, a <u>full binary tree</u>, except for the leaf nodes, has two child nodes for all other nodes.

As shown in the figure below, in a <u>full binary tree</u>, all nodes except leaf nodes have two child nodes.

@@ -643,10 +669,10 @@ As shown in the figure below, in a <u>balanced binary tree</u>, the absolute dif
## Degeneration of binary trees

The figure below shows the ideal and degenerate structures of binary trees. A binary tree becomes a "perfect binary tree" when every level is filled; while it degenerates into a "linked list" when all nodes are biased toward one side.

The figure below shows the ideal and degenerate structures of binary trees. When every level of a binary tree is filled, it reaches the "perfect binary tree" state; when all nodes are biased toward one side, the binary tree degenerates into a "linked list".

- A perfect binary tree is an ideal scenario where the "divide and conquer" advantage of a binary tree can be fully utilized.
- On the other hand, a linked list represents another extreme where all operations become linear, resulting in a time complexity of $O(n)$.

- A perfect binary tree is the ideal case, fully leveraging the "divide and conquer" advantage of binary trees.
- A linked list represents the other extreme, where all operations become linear operations with time complexity degrading to $O(n)$.
@@ -8,13 +8,13 @@ The common traversal methods for binary trees include level-order traversal, pre

As shown in the figure below, <u>level-order traversal</u> traverses the binary tree from top to bottom, layer by layer. Within each level, it visits nodes from left to right.

Level-order traversal is essentially a type of <u>breadth-first traversal</u>, also known as <u>breadth-first search (BFS)</u>, which embodies a "circumferentially outward expanding" layer-by-layer traversal method.

Level-order traversal is essentially <u>breadth-first traversal</u>, also known as <u>breadth-first search (BFS)</u>, which embodies an "expanding outward circle by circle" layer-by-layer traversal method.

### Code implementation

Breadth-first traversal is usually implemented with the help of a "queue". The queue follows the "first in, first out" rule, while breadth-first traversal follows the "layer-by-layer progression" rule, the underlying ideas of the two are consistent. The implementation code is as follows:

Breadth-first traversal is typically implemented with the help of a "queue". The queue follows the "first in, first out" rule, while breadth-first traversal follows the "layer-by-layer progression" rule; the underlying ideas of the two are consistent. The implementation code is as follows:

```src
[file]{binary_tree_bfs}-[class]{}-[func]{level_order}
```
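The included `level_order` function follows the queue-based pattern just described. A sketch of that pattern, assuming the `TreeNode` class shown earlier (the book's actual implementation lives in `binary_tree_bfs`, so treat this as an illustration):

```ruby
# Level-order traversal sketch: an array used as a FIFO queue.
def level_order(root)
  return [] if root.nil?
  queue = [root]
  res = []
  until queue.empty?
    node = queue.shift  # dequeue: first in, first out
    res << node.val     # visit the node
    queue << node.left unless node.left.nil?   # enqueue left child
    queue << node.right unless node.right.nil? # enqueue right child
  end
  res
end
```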
@@ -22,16 +22,16 @@ Breadth-first traversal is usually implemented with the help of a "queue". The q

### Complexity analysis

- **Time complexity is $O(n)$**: All nodes are visited once, taking $O(n)$ time, where $n$ is the number of nodes.
- **Space complexity is $O(n)$**: In the worst case, i.e., a full binary tree, before traversing to the bottom level, the queue can contain at most $(n + 1) / 2$ nodes simultaneously, occupying $O(n)$ space.

- **Time complexity is $O(n)$**: All nodes are visited once, using $O(n)$ time, where $n$ is the number of nodes.
- **Space complexity is $O(n)$**: In the worst case, i.e., a full binary tree, before traversing to the bottom level, the queue contains at most $(n + 1) / 2$ nodes simultaneously, occupying $O(n)$ space.
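The $(n + 1) / 2$ bound can be observed directly: run BFS on a perfect binary tree and track the largest queue size, which peaks when the whole bottom level is enqueued. A self-contained sketch (a `Struct` stands in for the earlier `TreeNode` class; `build_perfect` is a hypothetical helper using the array mapping where the children of index $i$ are $2i + 1$ and $2i + 2$):

```ruby
TreeNode = Struct.new(:val, :left, :right) # stand-in for the earlier class

# Build a perfect binary tree from an array of values.
def build_perfect(vals, i = 0)
  return nil if i >= vals.length
  TreeNode.new(vals[i],
               build_perfect(vals, 2 * i + 1),
               build_perfect(vals, 2 * i + 2))
end

root = build_perfect((1..15).to_a) # n = 15 nodes, height h = 3
queue = [root]
max_size = 0
until queue.empty?
  max_size = [max_size, queue.size].max # track the widest the queue gets
  node = queue.shift
  queue << node.left if node.left
  queue << node.right if node.right
end
# max_size is 8, i.e. (15 + 1) / 2: the entire bottom level at once
```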
## Preorder, in-order, and post-order traversal

## Preorder, inorder, and postorder traversal

Correspondingly, pre-order, in-order, and post-order traversal all belong to <u>depth-first traversal</u>, also known as <u>depth-first search (DFS)</u>, which embodies a "proceed to the end first, then backtrack and continue" traversal method.

Correspondingly, preorder, inorder, and postorder traversals all belong to <u>depth-first traversal</u>, also known as <u>depth-first search (DFS)</u>, which embodies a "first go to the end, then backtrack and continue" traversal method.

The figure below shows the working principle of performing a depth-first traversal on a binary tree. **Depth-first traversal is like "walking" around the entire binary tree**, encountering three positions at each node, corresponding to pre-order, in-order, and post-order traversal.

The figure below shows how depth-first traversal works on a binary tree. **Depth-first traversal is like "walking" around the perimeter of the entire binary tree**, encountering three positions at each node, corresponding to preorder, inorder, and postorder traversal.

### Code implementation
@@ -45,13 +45,13 @@ Depth-first search is usually implemented based on recursion:

Depth-first search can also be implemented based on iteration, interested readers can study this on their own.

The figure below shows the recursive process of pre-order traversal of a binary tree, which can be divided into two opposite parts: "recursion" and "return".

The figure below shows the recursive process of preorder traversal of a binary tree, which can be divided into two opposite parts: "recursion" and "return".

1. "Recursion" means starting a new method, the program accesses the next node in this process.
2. "Return" means the function returns, indicating the current node has been fully accessed.

1. "Recursion" means opening a new method, where the program accesses the next node in this process.
2. "Return" means the function returns, indicating that the current node has been fully visited.

=== "<1>"

=== "<2>"

@@ -86,4 +86,4 @@ The figure below shows the recursive process of pre-order traversal of a binary
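The three orders differ only in where the "visit" happens relative to the two recursive calls. A sketch of all three, assuming the `TreeNode` class shown earlier (illustrative helpers, not the book's included code):

```ruby
# Preorder: visit root, then left subtree, then right subtree.
def pre_order(root, res = [])
  return res if root.nil?
  res << root.val
  pre_order(root.left, res)
  pre_order(root.right, res)
  res
end

# Inorder: left subtree, then root, then right subtree.
def in_order(root, res = [])
  return res if root.nil?
  in_order(root.left, res)
  res << root.val
  in_order(root.right, res)
  res
end

# Postorder: left subtree, then right subtree, then root.
def post_order(root, res = [])
  return res if root.nil?
  post_order(root.left, res)
  post_order(root.right, res)
  res << root.val
  res
end
```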
### Complexity analysis

- **Time complexity is $O(n)$**: All nodes are visited once, using $O(n)$ time.
- **Space complexity is $O(n)$**: In the worst case, i.e., the tree degenerates into a linked list, the recursion depth reaches $n$, the system occupies $O(n)$ stack frame space.

- **Space complexity is $O(n)$**: In the worst case, i.e., the tree degenerates into a linked list, the recursion depth reaches $n$, and the system occupies $O(n)$ stack frame space.
@@ -4,6 +4,6 @@

!!! abstract

    The towering tree exudes a vibrant essence, boasting profound roots and abundant foliage, yet its branches are sparsely scattered, creating an ethereal aura.

    It shows us the vivid form of divide-and-conquer in data.

    Towering trees are full of vitality, with deep roots and lush leaves, spreading branches and flourishing.

    They show us the vivid form of divide and conquer in data.
@@ -2,17 +2,17 @@

### Key review

- A binary tree is a non-linear data structure that reflects the "divide and conquer" logic of splitting one into two. Each binary tree node contains a value and two pointers, which point to its left and right child nodes, respectively.
- For a node in a binary tree, its left (right) child node and the tree formed below it are collectively called the node's left (right) subtree.
- Terms related to binary trees include root node, leaf node, level, degree, edge, height, and depth.
- The operations of initializing a binary tree, inserting nodes, and removing nodes are similar to those of linked list operations.
- Common types of binary trees include perfect binary trees, complete binary trees, full binary trees, and balanced binary trees. The perfect binary tree represents the ideal state, while the linked list is the worst state after degradation.
- A binary tree can be represented using an array by arranging the node values and empty slots in a level-order traversal sequence and implementing pointers based on the index mapping relationship between parent nodes and child nodes.
- The level-order traversal of a binary tree is a breadth-first search method, which reflects a layer-by-layer traversal manner of "expanding circle by circle." It is usually implemented using a queue.
- Pre-order, in-order, and post-order traversals are all depth-first search methods, reflecting the traversal manner of "going to the end first, then backtracking to continue." They are usually implemented using recursion.
- A binary search tree is an efficient data structure for element searching, with the time complexity of search, insert, and remove operations all being $O(\log n)$. When a binary search tree degrades into a linked list, these time complexities deteriorate to $O(n)$.
- An AVL tree, also known as a balanced binary search tree, ensures that the tree remains balanced after continuous node insertions and removals through rotation operations.
- Rotation operations in an AVL tree include right rotation, left rotation, right-left rotation, and left-right rotation. After node insertion or removal, the AVL tree rebalances itself by performing these rotations in a bottom-up manner.

- A binary tree is a non-linear data structure that embodies the divide-and-conquer logic of "one divides into two". Each binary tree node contains a value and two pointers, which respectively point to its left and right child nodes.
- For a certain node in a binary tree, the tree formed by its left (right) child node and all nodes below is called the left (right) subtree of that node.
- Related terminology of binary trees includes root node, leaf node, level, degree, edge, height, and depth.
- The initialization, node insertion, and node removal operations of binary trees are similar to those of linked lists.
- Common types of binary trees include perfect binary trees, complete binary trees, full binary trees, and balanced binary trees. The perfect binary tree is the ideal state, while the linked list is the worst state after degradation.
- A binary tree can be represented using an array by arranging node values and empty slots in level-order traversal sequence, and implementing pointers based on the index mapping relationship between parent and child nodes.
- Level-order traversal of a binary tree is a breadth-first search method, embodying a layer-by-layer traversal approach of "expanding outward circle by circle", typically implemented using a queue.
- Preorder, inorder, and postorder traversals all belong to depth-first search, embodying a traversal approach of "first go to the end, then backtrack and continue", typically implemented using recursion.
- A binary search tree is an efficient data structure for element searching, with search, insertion, and removal operations all having time complexity of $O(\log n)$. When a binary search tree degenerates into a linked list, all time complexities degrade to $O(n)$.
- An AVL tree, also known as a balanced binary search tree, ensures the tree remains balanced after continuous node insertions and removals through rotation operations.
- Rotation operations in AVL trees include right rotation, left rotation, left rotation then right rotation, and right rotation then left rotation. After inserting or removing nodes, AVL trees perform rotation operations from bottom to top to restore the tree to balance.
### Q & A

@@ -24,21 +24,21 @@ Yes, because height and depth are typically defined as "the number of edges pass

Taking the binary search tree as an example, the operation of removing a node needs to be handled in three different scenarios, each requiring multiple steps of node operations.

**Q**: Why are there three sequences: pre-order, in-order, and post-order for DFS traversal of a binary tree, and what are their uses?

**Q**: Why does DFS traversal of binary trees have three orders: preorder, inorder, and postorder, and what are their uses?

Similar to sequential and reverse traversal of arrays, pre-order, in-order, and post-order traversals are three methods of traversing a binary tree, allowing us to obtain a traversal result in a specific order. For example, in a binary search tree, since the node sizes satisfy `left child node value < root node value < right child node value`, we can obtain an ordered node sequence by traversing the tree in the "left $\rightarrow$ root $\rightarrow$ right" priority.

Similar to forward and reverse traversal of arrays, preorder, inorder, and postorder traversals are three methods of binary tree traversal that allow us to obtain a traversal result in a specific order. For example, in a binary search tree, since nodes satisfy the relationship `left child node value < root node value < right child node value`, we only need to traverse the tree with the priority of "left $\rightarrow$ root $\rightarrow$ right" to obtain an ordered node sequence.

**Q**: In a right rotation operation that deals with the relationship between the imbalance nodes `node`, `child`, `grand_child`, isn't the connection between `node` and its parent node and the original link of `node` lost after the right rotation?

**Q**: In a right rotation operation handling the relationship between unbalanced nodes `node`, `child`, and `grand_child`, doesn't the connection between `node` and its parent node get lost after the right rotation?

We need to view this problem from a recursive perspective. The `right_rotate(root)` operation passes the root node of the subtree and eventually returns the root node of the rotated subtree with `return child`. The connection between the subtree's root node and its parent node is established after this function returns, which is outside the scope of the right rotation operation's maintenance.

We need to view this problem from a recursive perspective. The right rotation operation `right_rotate(root)` passes in the root node of the subtree and eventually returns the root node of the subtree after rotation with `return child`. The connection between the subtree's root node and its parent node is completed after the function returns, which is not within the maintenance scope of the right rotation operation.
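This answer can be made concrete with a sketch of the right rotation it describes (a hypothetical `Node` struct and helper, not the book's AVL code): the rotated subtree's new root is returned, and the caller re-links it, so no connection is lost.

```ruby
# Hypothetical node type, for illustration only.
Node = Struct.new(:val, :left, :right)

# Right rotation: child becomes the new subtree root.
def right_rotate(node)
  child = node.left
  grand_child = child.right
  child.right = node      # node becomes child's right subtree
  node.left = grand_child # grand_child is re-hung under node
  child                   # returned to the caller, who re-links it
end

# Unbalanced chain 3 <- 2 <- 1 becomes a balanced subtree rooted at 2.
root = Node.new(3, Node.new(2, Node.new(1)))
root = right_rotate(root) # the caller's re-link happens right here
```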
**Q**: In C++, functions are divided into `private` and `public` sections. What considerations are there for this? Why are the `height()` function and the `updateHeight()` function placed in `public` and `private`, respectively?

It depends on the scope of the method's use. If a method is only used within the class, then it is designed to be `private`. For example, it makes no sense for users to call `updateHeight()` on their own, as it is just a step in the insertion or removal operations. However, `height()` is for accessing node height, similar to `vector.size()`, thus it is set to `public` for use.

It mainly depends on the method's usage scope. If a method is only used within the class, then it is designed as `private`. For example, calling `updateHeight()` alone by the user makes no sense, as it is only a step in insertion or removal operations. However, `height()` is used to access node height, similar to `vector.size()`, so it is set to `public` for ease of use.

**Q**: How do you build a binary search tree from a set of input data? Is the choice of root node very important?

Yes, the method for building the tree is provided in the `build_tree()` method in the binary search tree code. As for the choice of the root node, we usually sort the input data and then select the middle element as the root node, recursively building the left and right subtrees. This approach maximizes the balance of the tree.

Yes, the method for building a tree is provided in the `build_tree()` method in the binary search tree code. As for the choice of root node, we typically sort the input data, then select the middle element as the root node, and recursively build the left and right subtrees. This approach maximizes the tree's balance.
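The middle-element strategy described in this answer can be sketched as follows (a hypothetical `build_tree` over a sorted array with a `BNode` struct, not the book's exact code): the middle element becomes the root, and the two halves build the subtrees recursively.

```ruby
# Hypothetical node type, for illustration only.
BNode = Struct.new(:val, :left, :right)

# Build a balanced BST from a sorted array by rooting at the middle.
def build_tree(sorted, lo = 0, hi = sorted.length - 1)
  return nil if lo > hi
  mid = (lo + hi) / 2
  node = BNode.new(sorted[mid])
  node.left = build_tree(sorted, lo, mid - 1)
  node.right = build_tree(sorted, mid + 1, hi)
  node
end

root = build_tree([1, 2, 3, 4, 5, 6, 7])
# root.val == 4; the subtrees rooted at 2 and 6 are perfectly balanced
```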
**Q**: In Java, do you always have to use the `equals()` method for string comparison?

@@ -47,7 +47,7 @@ In Java, for primitive data types, `==` is used to compare whether the values of

- `==`: Used to compare whether two variables point to the same object, i.e., whether their positions in memory are the same.
- `equals()`: Used to compare whether the values of two objects are equal.

Therefore, to compare values, we should use `equals()`. However, strings initialized with `String a = "hi"; String b = "hi";` are stored in the string constant pool and point to the same object, so `a == b` can also be used to compare the contents of two strings.

Therefore, if we want to compare values, we should use `equals()`. However, strings initialized via `String a = "hi"; String b = "hi";` are stored in the string constant pool and point to the same object, so `a == b` can also be used to compare the contents of the two strings.

**Q**: Before reaching the bottom level, is the number of nodes in the queue $2^h$ in breadth-first traversal?