From 04b028a56d9a803dfef7a1c4283f346eb26e2db5 Mon Sep 17 00:00:00 2001 From: krahets Date: Tue, 9 Jan 2024 04:42:00 +0800 Subject: [PATCH] build --- docs-en/chapter_array_and_linkedlist/array.md | 76 ++-- .../linked_list.md | 16 +- docs-en/chapter_array_and_linkedlist/list.md | 4 +- .../chapter_array_and_linkedlist/summary.md | 74 ++-- .../iteration_and_recursion.md | 32 +- .../space_complexity.md | 24 +- .../summary.md | 24 +- .../time_complexity.md | 48 +-- docs-en/chapter_data_structure/summary.md | 18 +- docs-en/chapter_stack_and_queue/deque.md | 400 ++++++++++++++++++ docs-en/chapter_stack_and_queue/index.md | 13 + docs-en/chapter_stack_and_queue/queue.md | 376 ++++++++++++++++ docs-en/chapter_stack_and_queue/stack.md | 384 +++++++++++++++++ docs-en/chapter_stack_and_queue/summary.md | 31 ++ docs/chapter_array_and_linkedlist/array.md | 27 +- .../linked_list.md | 19 +- docs/chapter_array_and_linkedlist/list.md | 22 +- .../backtracking_algorithm.md | 16 +- docs/chapter_backtracking/n_queens_problem.md | 4 +- .../permutations_problem.md | 8 +- .../subset_sum_problem.md | 12 +- .../iteration_and_recursion.md | 32 +- .../space_complexity.md | 24 +- .../time_complexity.md | 48 +-- .../basic_data_types.md | 3 +- .../binary_search_recur.md | 4 +- .../build_binary_tree_problem.md | 4 +- .../hanota_problem.md | 4 +- .../dp_problem_features.md | 12 +- .../dp_solution_pipeline.md | 16 +- .../edit_distance_problem.md | 8 +- .../intro_to_dynamic_programming.md | 20 +- .../knapsack_problem.md | 16 +- .../unbounded_knapsack_problem.md | 24 +- docs/chapter_graph/graph_operations.md | 8 +- docs/chapter_graph/graph_traversal.md | 8 +- .../fractional_knapsack_problem.md | 4 +- docs/chapter_greedy/greedy_algorithm.md | 4 +- docs/chapter_greedy/max_capacity_problem.md | 4 +- .../max_product_cutting_problem.md | 4 +- docs/chapter_hashing/hash_algorithm.md | 7 +- docs/chapter_hashing/hash_collision.md | 4 +- docs/chapter_hashing/hash_map.md | 10 +- docs/chapter_heap/build_heap.md | 4 +- docs/chapter_heap/heap.md | 15 +- docs/chapter_heap/top_k.md | 4 +- docs/chapter_searching/binary_search.md | 8 +- docs/chapter_searching/binary_search_edge.md | 8 +- .../binary_search_insertion.md | 8 +- .../replace_linear_by_hashing.md | 8 +- docs/chapter_sorting/bubble_sort.md | 8 +- docs/chapter_sorting/bucket_sort.md | 4 +- docs/chapter_sorting/counting_sort.md | 8 +- docs/chapter_sorting/heap_sort.md | 4 +- docs/chapter_sorting/insertion_sort.md | 4 +- docs/chapter_sorting/merge_sort.md | 4 +- docs/chapter_sorting/quick_sort.md | 16 +- docs/chapter_sorting/radix_sort.md | 4 +- docs/chapter_sorting/selection_sort.md | 4 +- docs/chapter_stack_and_queue/deque.md | 17 +- docs/chapter_stack_and_queue/queue.md | 11 +- docs/chapter_stack_and_queue/stack.md | 15 +- .../array_representation_of_tree.md | 4 +- docs/chapter_tree/binary_search_tree.md | 12 +- docs/chapter_tree/binary_tree.md | 6 +- docs/chapter_tree/binary_tree_traversal.md | 8 +- overrides/stylesheets/extra.css | 10 + 67 files changed, 1654 insertions(+), 436 deletions(-) create mode 100644 docs-en/chapter_stack_and_queue/deque.md create mode 100644 docs-en/chapter_stack_and_queue/index.md create mode 100755 docs-en/chapter_stack_and_queue/queue.md create mode 100755 docs-en/chapter_stack_and_queue/stack.md create mode 100644 docs-en/chapter_stack_and_queue/summary.md diff --git a/docs-en/chapter_array_and_linkedlist/array.md b/docs-en/chapter_array_and_linkedlist/array.md index 5e27eafda..98bc1c4cf 100755 --- a/docs-en/chapter_array_and_linkedlist/array.md +++ 
b/docs-en/chapter_array_and_linkedlist/array.md
@@ -4,7 +4,7 @@ comments: true

# 4.1   Arrays

-The "array" is a linear data structure that stores elements of the same type in contiguous memory locations. We refer to the position of an element in the array as its "index". The following image illustrates the main terminology and concepts of an array.
+An "array" is a linear data structure that stores elements of the same type in contiguous memory spaces, like a lineup of similar items kept side by side in a computer's memory. Each element occupies its own unique position in this lineup, known as its "index". Please refer to Figure 4-1 to observe how arrays work and to grasp these key terms.

![Array Definition and Storage Method](array.assets/array_definition.png){ class="animation-figure" }

@@ -14,7 +14,7 @@ The "array" is a linear data structure that stores elements of the same type in

### 1.   Initializing Arrays

-There are two ways to initialize arrays depending on the requirements: without initial values and with given initial values. In cases where initial values are not specified, most programming languages will initialize the array elements to $0$:
+Arrays can be initialized in two ways depending on the requirements: either without initial values or with specified initial values. When initial values are not specified, most programming languages will set the array elements to $0$:

=== "Python"

@@ -121,13 +121,13 @@ There are two ways to initialize arrays depending on the requirements: without i

### 2.   Accessing Elements

-Elements in an array are stored in contiguous memory locations, which makes it easy to compute the memory address of any element. Given the memory address of the array (the address of the first element) and the index of an element, we can calculate the memory address of that element using the formula shown in the following image, allowing direct access to the element.
+Elements in an array are stored in contiguous memory spaces, which makes it simple to compute each element's memory address. Given the array's memory address (specifically, the first element's address) and an element's index, the formula shown in the figure below determines that element's memory address, enabling direct access to the desired element.

![Memory Address Calculation for Array Elements](array.assets/array_memory_location_calculation.png){ class="animation-figure" }

Figure 4-2   Memory Address Calculation for Array Elements

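To make the formula concrete, here is a minimal sketch in Python. The base address and the 4-byte element size are assumed values, purely for illustration:

```python
# Assumed values for illustration: an int array whose first
# element sits at memory address 1000, with 4 bytes per element
base_address = 1000
element_size = 4

def element_address(index: int) -> int:
    # Element memory address = array memory address + element length * element index
    return base_address + element_size * index

print(element_address(0))  # 1000: the first element's offset is 0
print(element_address(3))  # 1012: three elements (12 bytes) past the base
```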
-As observed in the above image, the index of the first element of an array is $0$, which may seem counterintuitive since counting starts from $1$. However, from the perspective of the address calculation formula, **an index is essentially an offset from the memory address**. The offset for the first element's address is $0$, making its index $0$ logical.
+As observed in the illustration above, array indexing conventionally begins at $0$. While this might appear counterintuitive given that counting usually starts at $1$, it follows naturally from the address calculation formula: **an index is essentially an offset from the memory address**. The first element's offset from the array's address is $0$, so its index is logically $0$.

Accessing elements in an array is highly efficient, allowing us to randomly access any element in $O(1)$ time.

@@ -289,18 +289,18 @@ Accessing elements in an array is highly efficient, allowing us to randomly acce

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

### 3.   Inserting Elements

-As shown in the image below, to insert an element in the middle of an array, all elements following the insertion point must be moved one position back to make room for the new element.
+Array elements are tightly packed in memory, with no space available to accommodate additional data between them. As illustrated in the figure below, inserting an element in the middle of an array requires shifting all subsequent elements back by one position to create room for the new element.

![Array Element Insertion Example](array.assets/array_insert_element.png){ class="animation-figure" }

Figure 4-3   Array Element Insertion Example

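As a concrete sketch of this shifting step, the illustrative Python function below moves elements backwards from the end, so that no value is overwritten before it has been copied (a simplified example; index validation is omitted):

```python
def insert(nums: list[int], num: int, index: int):
    """Insert num at index by shifting all later elements back one position"""
    # Traverse from the last element down to index, moving each element back
    for i in range(len(nums) - 1, index, -1):
        nums[i] = nums[i - 1]
    # The slot at index is now free to hold the new element
    nums[index] = num

nums = [1, 3, 2, 5, 4]
insert(nums, 6, 3)
print(nums)  # [1, 3, 2, 6, 5]: the original last element 4 is lost
```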
-It's important to note that since the length of an array is fixed, inserting an element will inevitably lead to the loss of the last element in the array. We will discuss solutions to this problem in the "List" chapter.
+It's important to note that due to the fixed length of an array, inserting an element will inevitably result in the loss of the last element in the array. Solutions to this issue will be explored in the "List" chapter.

=== "Python"

@@ -471,18 +471,18 @@ It's important to note that since the length of an array is fixed, inserting an

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

### 4.   Deleting Elements

-Similarly, as illustrated below, to delete an element at index $i$, all elements following index $i$ must be moved forward by one position.
+Similarly, as depicted in Figure 4-4, to delete an element at index $i$, all elements following index $i$ must be moved forward by one position.

![Array Element Deletion Example](array.assets/array_remove_element.png){ class="animation-figure" }

Figure 4-4   Array Element Deletion Example

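The shifting for deletion runs in the opposite direction, from the deletion point towards the end; a minimal illustrative sketch in Python:

```python
def remove(nums: list[int], index: int):
    """Remove the element at index by shifting all later elements forward one position"""
    for i in range(index, len(nums) - 1):
        nums[i] = nums[i + 1]
    # The last slot still holds a leftover copy of the old tail element

nums = [1, 3, 2, 5, 4]
remove(nums, 2)
print(nums)  # [1, 3, 5, 4, 4]: the trailing 4 is the "meaningless" leftover
```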
-Note that after deletion, the last element becomes "meaningless", so we do not need to specifically modify it.
+Please note that after deletion, the former last element becomes "meaningless", so there is no need to modify it specifically.

=== "Python"

@@ -630,18 +630,18 @@ Note that after deletion, the last element becomes "meaningless", so we do not n

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

-Overall, the insertion and deletion operations in arrays have the following disadvantages:
+In summary, the insertion and deletion operations in arrays present the following disadvantages:

- **High Time Complexity**: Both insertion and deletion in an array have an average time complexity of $O(n)$, where $n$ is the length of the array.
- **Loss of Elements**: Due to the fixed length of arrays, elements that exceed the array's capacity are lost during insertion.
-- **Waste of Memory**: We can initialize a longer array and use only the front part, allowing the "lost" end elements during insertion to be "meaningless", but this leads to some wasted memory space.
+- **Waste of Memory**: We can initialize a longer array and use only its front part, so that the end elements "lost" during insertion are merely "meaningless", but this leads to some wasted memory space.

### 5.   Traversing Arrays

-In most programming languages, we can traverse an array either by indices or by directly iterating over each element:
+In most programming languages, we can traverse an array either by using indices or by directly iterating over each element:

=== "Python"

@@ -854,14 +854,14 @@ In most programming languages, we can traverse an array either by indices or by

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

### 6.   Finding Elements

-To find a specific element in an array, we need to iterate through it, checking each element to see if it matches.
+Locating a specific element within an array involves iterating through the array and checking each element to determine whether it matches the desired value.

-Since arrays are linear data structures, this operation is known as "linear search".
+Because arrays are linear data structures, this operation is commonly referred to as "linear search".

=== "Python"

@@ -1022,14 +1022,14 @@ Since arrays are linear data structures, this operation is known as "linear sear

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

### 7.   Expanding Arrays

-In complex system environments, it's challenging to ensure that the memory space following an array is available, making it unsafe to extend the array's capacity. Therefore, in most programming languages, **the length of an array is immutable**.
+In complex system environments, it is difficult to guarantee that the memory space following an array is available, which makes extending the array's capacity in place unsafe. Consequently, in most programming languages, **the length of an array is immutable**.

-To expand an array, we need to create a larger array and then copy the elements from the original array. This operation has a time complexity of $O(n)$ and can be time-consuming for large arrays. The code is as follows:
+To expand an array, it's necessary to create a larger array and then copy over the elements from the original array. This operation has a time complexity of $O(n)$ and can be time-consuming for large arrays. The code is as follows:

=== "Python"

@@ -1232,29 +1232,29 @@ To expand an array, we need to create a larger array and then copy the elements

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

## 4.1.2   Advantages and Limitations of Arrays

-Arrays are stored in contiguous memory spaces and consist of elements of the same type. This approach includes a wealth of prior information that the system can use to optimize the operation efficiency of the data structure.
+Arrays are stored in contiguous memory spaces and consist of elements of the same type. This approach provides substantial prior information that systems can leverage to optimize the efficiency of data structure operations.

- **High Space Efficiency**: Arrays allocate a contiguous block of memory for data, eliminating the need for additional structural overhead.
- **Support for Random Access**: Arrays allow $O(1)$ time access to any element.
-- **Cache Locality**: When accessing array elements, the computer not only loads them but also caches the surrounding data, leveraging high-speed cache to improve the speed of subsequent operations.
+- **Cache Locality**: When accessing array elements, the computer not only loads them but also caches the surrounding data, utilizing high-speed cache to enhance the speed of subsequent operations.

However, continuous space storage is a double-edged sword, with the following limitations:

-- **Low Efficiency in Insertion and Deletion**: When there are many elements in an array, insertion and deletion operations require moving a large number of elements.
-- **Fixed Length**: The length of an array is fixed after initialization. Expanding an array requires copying all data to a new array, which is costly.
-- **Space Wastage**: If the allocated size of an array exceeds the actual need, the extra space is wasted.
+- **Low Efficiency in Insertion and Deletion**: As arrays accumulate many elements, inserting or deleting elements requires shifting a large number of elements.
+- **Fixed Length**: The length of an array is fixed after initialization. Expanding an array requires copying all data to a new array, incurring significant costs.
+- **Space Wastage**: If the allocated array size exceeds what is necessary, the extra space is wasted.

## 4.1.3   Typical Applications of Arrays

-Arrays are a fundamental and common data structure, frequently used in various algorithms and in implementing complex data structures.
+Arrays are fundamental and widely used data structures. They find frequent application in various algorithms and serve in the implementation of complex data structures.

-- **Random Access**: If we want to randomly sample some data, we can use an array for storage and generate a random sequence to implement random sampling based on indices.
-- **Sorting and Searching**: Arrays are the most commonly used data structure for sorting and searching algorithms. Quick sort, merge sort, binary search, etc., are primarily conducted on arrays.
-- **Lookup Tables**: Arrays can be used as lookup tables for fast element or relationship retrieval. For instance, if we want to implement a mapping from characters to ASCII codes, we can use the ASCII code value of a character as the index, with the corresponding element stored in the corresponding position in the array.
-- **Machine Learning**: Arrays are extensively used in neural networks for linear algebra operations between vectors, matrices, and tensors. Arrays are the most commonly used data structure in neural network programming.
-- **Data Structure Implementation**: Arrays can be used to implement stacks, queues, hash tables, heaps, graphs, etc. 
For example, the adjacency matrix representation of a graph is essentially a two-dimensional array.
+- **Random Access**: Arrays are ideal for storing data when random sampling is required. By generating a random sequence based on indices, we can achieve random sampling efficiently.
+- **Sorting and Searching**: Arrays are the most commonly used data structure for sorting and searching algorithms. Techniques like quick sort, merge sort, and binary search primarily operate on arrays.
+- **Lookup Tables**: Arrays serve as efficient lookup tables for quick element or relationship retrieval. For instance, mapping characters to ASCII codes becomes seamless by using the ASCII code values as indices and storing the corresponding elements in the array.
+- **Machine Learning**: Within the domain of neural networks, arrays play a pivotal role in executing crucial linear algebra operations involving vectors, matrices, and tensors. Arrays serve as the primary and most extensively used data structure in neural network programming.
+- **Data Structure Implementation**: Arrays serve as the building blocks for implementing various data structures like stacks, queues, hash tables, heaps, and graphs. For instance, the adjacency matrix representation of a graph is essentially a two-dimensional array.

diff --git a/docs-en/chapter_array_and_linkedlist/linked_list.md b/docs-en/chapter_array_and_linkedlist/linked_list.md
index 133c00a03..93ed7d037 100755
--- a/docs-en/chapter_array_and_linkedlist/linked_list.md
+++ b/docs-en/chapter_array_and_linkedlist/linked_list.md
@@ -542,8 +542,8 @@ In contrast, the time complexity of inserting an element in an array is $O(n)$,

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

### 3.   Deleting a Node

@@ -732,8 +732,8 @@ Note that although node `P` still points to `n1` after the deletion operation is

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

### 4.   Accessing Nodes

@@ -911,8 +911,8 @@ Note that although node `P` still points to `n1` after the deletion operation is

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

### 5.   Finding Nodes

@@ -1113,8 +1113,8 @@ Traverse the linked list to find a node with a value equal to `target`, and outp

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

## 4.2.2   Arrays vs. Linked Lists

diff --git a/docs-en/chapter_array_and_linkedlist/list.md b/docs-en/chapter_array_and_linkedlist/list.md
index 342cbafde..93a720683 100755
--- a/docs-en/chapter_array_and_linkedlist/list.md
+++ b/docs-en/chapter_array_and_linkedlist/list.md
@@ -2121,5 +2121,5 @@ To deepen the understanding of how lists work, let's try implementing a simple v

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

diff --git a/docs-en/chapter_array_and_linkedlist/summary.md b/docs-en/chapter_array_and_linkedlist/summary.md
index 865e4d2fe..ce5a70d88 100644
--- a/docs-en/chapter_array_and_linkedlist/summary.md
+++ b/docs-en/chapter_array_and_linkedlist/summary.md
@@ -18,68 +18,68 @@ comments: true

### 2.   Q & A

-!!! question "Does storing arrays on the stack versus the heap affect time and space efficiency?"
+**Q**: Does storing arrays on the stack versus the heap affect time and space efficiency?

-    Arrays stored on both the stack and heap are stored in continuous memory spaces, and data operation efficiency is essentially the same. However, stacks and heaps have their own characteristics, leading to the following differences.
+Arrays stored on both the stack and heap are stored in continuous memory spaces, and data operation efficiency is essentially the same. However, stacks and heaps have their own characteristics, leading to the following differences.

-    1. Allocation and release efficiency: The stack is a smaller memory block, allocated automatically by the compiler; the heap memory is relatively larger and can be dynamically allocated in the code, more prone to fragmentation. Therefore, allocation and release operations on the heap are generally slower than on the stack.
-    2. Size limitation: Stack memory is relatively small, while the heap size is generally limited by available memory. Therefore, the heap is more suitable for storing large arrays.
-    3. Flexibility: The size of arrays on the stack needs to be determined at compile-time, while the size of arrays on the heap can be dynamically determined at runtime.
+1. Allocation and release efficiency: The stack is a smaller memory block, allocated automatically by the compiler; the heap memory is relatively larger and can be dynamically allocated in the code, more prone to fragmentation. Therefore, allocation and release operations on the heap are generally slower than on the stack.
+2. Size limitation: Stack memory is relatively small, while the heap size is generally limited by available memory. Therefore, the heap is more suitable for storing large arrays.
+3. Flexibility: The size of arrays on the stack needs to be determined at compile-time, while the size of arrays on the heap can be dynamically determined at runtime.

-!!! question "Why do arrays require elements of the same type, while linked lists do not emphasize same-type elements?"
+**Q**: Why do arrays require elements of the same type, while linked lists do not emphasize same-type elements?

-    Linked lists consist of nodes connected by references (pointers), and each node can store data of different types, such as int, double, string, object, etc.
+Linked lists consist of nodes connected by references (pointers), and each node can store data of different types, such as int, double, string, object, etc.

-    In contrast, array elements must be of the same type, allowing the calculation of offsets to access the corresponding element positions. For example, an array containing both int and long types, with single elements occupying 4 bytes and 8 bytes respectively, cannot use the following formula to calculate offsets, as the array contains elements of two different lengths.
+In contrast, array elements must be of the same type, allowing the calculation of offsets to access the corresponding element positions. 
For example, an array containing both int and long types, with single elements occupying 4 bytes and 8 bytes respectively, cannot use the following formula to calculate offsets, as the array contains elements of two different lengths. - ```shell - # Element memory address = Array memory address + Element length * Element index - ``` +```shell +# Element memory address = Array memory address + Element length * Element index +``` -!!! question "After deleting a node, is it necessary to set `P.next` to `None`?" +**Q**: After deleting a node, is it necessary to set `P.next` to `None`? - Not modifying `P.next` is also acceptable. From the perspective of the linked list, traversing from the head node to the tail node will no longer encounter `P`. This means that node `P` has been effectively removed from the list, and where `P` points no longer affects the list. +Not modifying `P.next` is also acceptable. From the perspective of the linked list, traversing from the head node to the tail node will no longer encounter `P`. This means that node `P` has been effectively removed from the list, and where `P` points no longer affects the list. - From a garbage collection perspective, for languages with automatic garbage collection mechanisms like Java, Python, and Go, whether node `P` is collected depends on whether there are still references pointing to it, not on the value of `P.next`. In languages like C and C++, we need to manually free the node's memory. +From a garbage collection perspective, for languages with automatic garbage collection mechanisms like Java, Python, and Go, whether node `P` is collected depends on whether there are still references pointing to it, not on the value of `P.next`. In languages like C and C++, we need to manually free the node's memory. -!!! question "In linked lists, the time complexity for insertion and deletion operations is `O(1)`. But searching for the element before insertion or deletion takes `O(n)` time, so why isn't the time complexity `O(n)`?" +**Q**: In linked lists, the time complexity for insertion and deletion operations is `O(1)`. But searching for the element before insertion or deletion takes `O(n)` time, so why isn't the time complexity `O(n)`? - If an element is searched first and then deleted, the time complexity is indeed `O(n)`. However, the `O(1)` advantage of linked lists in insertion and deletion can be realized in other applications. For example, in the implementation of double-ended queues using linked lists, we maintain pointers always pointing to the head and tail nodes, making each insertion and deletion operation `O(1)`. +If an element is searched first and then deleted, the time complexity is indeed `O(n)`. However, the `O(1)` advantage of linked lists in insertion and deletion can be realized in other applications. For example, in the implementation of double-ended queues using linked lists, we maintain pointers always pointing to the head and tail nodes, making each insertion and deletion operation `O(1)`. -!!! question "In the image 'Linked List Definition and Storage Method', do the light blue storage nodes occupy a single memory address, or do they share half with the node value?" +**Q**: In the image "Linked List Definition and Storage Method", do the light blue storage nodes occupy a single memory address, or do they share half with the node value? - The diagram is just a qualitative representation; quantitative analysis depends on specific situations. 
+The diagram is just a qualitative representation; quantitative analysis depends on specific situations. - - Different types of node values occupy different amounts of space, such as int, long, double, and object instances. - - The memory space occupied by pointer variables depends on the operating system and compilation environment used, usually 8 bytes or 4 bytes. +- Different types of node values occupy different amounts of space, such as int, long, double, and object instances. +- The memory space occupied by pointer variables depends on the operating system and compilation environment used, usually 8 bytes or 4 bytes. -!!! question "Is adding elements to the end of a list always `O(1)`?" +**Q**: Is adding elements to the end of a list always `O(1)`? - If adding an element exceeds the list length, the list needs to be expanded first. The system will request a new memory block and move all elements of the original list over, in which case the time complexity becomes `O(n)`. +If adding an element exceeds the list length, the list needs to be expanded first. The system will request a new memory block and move all elements of the original list over, in which case the time complexity becomes `O(n)`. -!!! question "The statement 'The emergence of lists greatly improves the practicality of arrays, but may lead to some memory space wastage' - does this refer to the memory occupied by additional variables like capacity, length, and expansion multiplier?" +**Q**: The statement "The emergence of lists greatly improves the practicality of arrays, but may lead to some memory space wastage" - does this refer to the memory occupied by additional variables like capacity, length, and expansion multiplier? - The space wastage here mainly refers to two aspects: on the one hand, lists are set with an initial length, which we may not always need; on the other hand, to prevent frequent expansion, expansion usually multiplies by a coefficient, such as $\times 1.5$. This results in many empty slots, which we typically cannot fully fill. +The space wastage here mainly refers to two aspects: on the one hand, lists are set with an initial length, which we may not always need; on the other hand, to prevent frequent expansion, expansion usually multiplies by a coefficient, such as $\times 1.5$. This results in many empty slots, which we typically cannot fully fill. -!!! question "In Python, after initializing `n = [1, 2, 3]`, the addresses of these 3 elements are contiguous, but initializing `m = [2, 1, 3]` shows that each element's `id` is not consecutive but identical to those in `n`. If the addresses of these elements are not contiguous, is `m` still an array?" +**Q**: In Python, after initializing `n = [1, 2, 3]`, the addresses of these 3 elements are contiguous, but initializing `m = [2, 1, 3]` shows that each element's `id` is not consecutive but identical to those in `n`. If the addresses of these elements are not contiguous, is `m` still an array? - If we replace list elements with linked list nodes `n = [n1, n2, n3, n4, n5]`, these 5 node objects are also typically dispersed throughout memory. However, given a list index, we can still access the node's memory address in `O(1)` time, thereby accessing the corresponding node. This is because the array stores references to the nodes, not the nodes themselves. +If we replace list elements with linked list nodes `n = [n1, n2, n3, n4, n5]`, these 5 node objects are also typically dispersed throughout memory. 
However, given a list index, we can still access the node's memory address in `O(1)` time, thereby accessing the corresponding node. This is because the array stores references to the nodes, not the nodes themselves. - Unlike many languages, in Python, numbers are also wrapped as objects, and lists store references to these numbers, not the numbers themselves. Therefore, we find that the same number in two arrays has the same `id`, and these numbers' memory addresses need not be contiguous. +Unlike many languages, in Python, numbers are also wrapped as objects, and lists store references to these numbers, not the numbers themselves. Therefore, we find that the same number in two arrays has the same `id`, and these numbers' memory addresses need not be contiguous. -!!! question "The `std::list` in C++ STL has already implemented a doubly linked list, but it seems that some algorithm books don't directly use it. Is there any limitation?" +**Q**: The `std::list` in C++ STL has already implemented a doubly linked list, but it seems that some algorithm books don't directly use it. Is there any limitation? - On the one hand, we often prefer to use arrays to implement algorithms, only using linked lists when necessary, mainly for two reasons. - - - Space overhead: Since each element requires two additional pointers (one for the previous element and one for the next), `std::list` usually occupies more space than `std::vector`. - - Cache unfriendly: As the data is not stored continuously, `std::list` has a lower cache utilization rate. Generally, `std::vector` performs better. +On the one hand, we often prefer to use arrays to implement algorithms, only using linked lists when necessary, mainly for two reasons. - On the other hand, linked lists are primarily necessary for binary trees and graphs. Stacks and queues are often implemented using the programming language's `stack` and `queue` classes, rather than linked lists. +- Space overhead: Since each element requires two additional pointers (one for the previous element and one for the next), `std::list` usually occupies more space than `std::vector`. +- Cache unfriendly: As the data is not stored continuously, `std::list` has a lower cache utilization rate. Generally, `std::vector` performs better. -!!! question "Does initializing a list `res = [0] * self.size()` result in each element of `res` referencing the same address?" +On the other hand, linked lists are primarily necessary for binary trees and graphs. Stacks and queues are often implemented using the programming language's `stack` and `queue` classes, rather than linked lists. - No. However, this issue arises with two-dimensional arrays, for example, initializing a two-dimensional list `res = [[0] * self.size()]` would reference the same list `[0]` multiple times. +**Q**: Does initializing a list `res = [0] * self.size()` result in each element of `res` referencing the same address? -!!! question "In deleting a node, is it necessary to break the reference to its successor node?" +No. However, this issue arises with two-dimensional arrays, for example, initializing a two-dimensional list `res = [[0] * self.size()]` would reference the same list `[0]` multiple times. - From the perspective of data structures and algorithms (problem-solving), it's okay not to break the link, as long as the program's logic is correct. From the perspective of standard libraries, breaking the link is safer and more logically clear. 
If the link is not broken, and the deleted node is not properly recycled, it could affect the recycling of the successor node's memory.

diff --git a/docs-en/chapter_computational_complexity/iteration_and_recursion.md b/docs-en/chapter_computational_complexity/iteration_and_recursion.md
index 8d2297f9f..3f8eddba7 100644
--- a/docs-en/chapter_computational_complexity/iteration_and_recursion.md
+++ b/docs-en/chapter_computational_complexity/iteration_and_recursion.md
@@ -184,8 +184,8 @@ The following function implements the sum $1 + 2 + \dots + n$ using a `for` loop

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

The flowchart below represents this sum function.

@@ -395,8 +395,8 @@ Below we use a `while` loop to implement the sum $1 + 2 + \dots + n$:

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

**The `while` loop is more flexible than the `for` loop**. In a `while` loop, we can freely design the initialization and update steps of the condition variable.

@@ -619,8 +619,8 @@ For example, in the following code, the condition variable $i$ is updated twice

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

Overall, **`for` loops are more concise, while `while` loops are more flexible**. Both can implement iterative structures. Which one to use should be determined based on the specific requirements of the problem.

@@ -838,8 +838,8 @@ We can nest one loop structure within another. Below is an example using `for` l

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

The flowchart below represents this nested loop.

@@ -1048,8 +1048,8 @@ Observe the following code, where calling the function `recur(n)` completes the

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

Figure 2-3 shows the recursive process of this function.

@@ -1249,8 +1249,8 @@ For example, in calculating $1 + 2 + \dots + n$, we can make the result variable

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

The execution process of tail recursion is shown in the following figure. Comparing regular recursion and tail recursion, the point of the summation operation is different.

@@ -1462,8 +1462,8 @@ Using the recursive relation, and considering the first two numbers as terminati

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

Observing the above code, we see that it recursively calls two functions within itself, **meaning that one call generates two branching calls**. As illustrated below, this continuous recursive calling eventually creates a "recursion tree" with a depth of $n$.

@@ -1785,8 +1785,8 @@ Therefore, **we can use an explicit stack to simulate the behavior of the call s

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

Observing the above code, when recursion is transformed into iteration, the code becomes more complex. Although iteration and recursion can often be transformed into each other, it's not always advisable to do so for two reasons:

diff --git a/docs-en/chapter_computational_complexity/space_complexity.md b/docs-en/chapter_computational_complexity/space_complexity.md
index 3d292ed69..19026a6ff 100644
--- a/docs-en/chapter_computational_complexity/space_complexity.md
+++ b/docs-en/chapter_computational_complexity/space_complexity.md
@@ -1064,8 +1064,8 @@ Note that memory occupied by initializing variables or calling functions in a lo

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

### 2.   Linear Order $O(n)$

@@ -1333,8 +1333,8 @@ Linear order is common in arrays, linked lists, stacks, queues, etc., where the

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

As shown below, this function's recursive depth is $n$, meaning there are $n$ instances of unreturned `linear_recur()` function, using $O(n)$ size of stack frame space:

@@ -1479,8 +1479,8 @@ As shown below, this function's recursive depth is $n$, meaning there are $n$ in

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

![Recursive Function Generating Linear Order Space Complexity](space_complexity.assets/space_complexity_recursive_linear.png){ class="animation-figure" }

@@ -1704,8 +1704,8 @@ Quadratic order is common in matrices and graphs, where the number of elements i

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

As shown below, the recursive depth of this function is $n$, and in each recursive call, an array is initialized with lengths $n$, $n-1$, $\dots$, $2$, $1$, averaging $n/2$, thus overall occupying $O(n^2)$ space:

@@ -1868,8 +1868,8 @@ As shown below, the recursive depth of this function is $n$, and in each recursi

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

![Recursive Function Generating Quadratic Order Space Complexity](space_complexity.assets/space_complexity_recursive_quadratic.png){ class="animation-figure" }

@@ -2046,8 +2046,8 @@ Exponential order is common in binary trees. Observe the below image, a "full bi

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

![Full Binary Tree Generating Exponential Order Space Complexity](space_complexity.assets/space_complexity_exponential.png){ class="animation-figure" }

diff --git a/docs-en/chapter_computational_complexity/summary.md b/docs-en/chapter_computational_complexity/summary.md
index 8436915fc..74f3d218e 100644
--- a/docs-en/chapter_computational_complexity/summary.md
+++ b/docs-en/chapter_computational_complexity/summary.md
@@ -30,24 +30,24 @@ comments: true

### 2.   Q & A

-!!! question "Is the space complexity of tail recursion $O(1)$?"
+**Q**: Is the space complexity of tail recursion $O(1)$?

-    Theoretically, the space complexity of a tail-recursive function can be optimized to $O(1)$. However, most programming languages (such as Java, Python, C++, Go, C#) do not support automatic optimization of tail recursion, so it's generally considered to have a space complexity of $O(n)$.
+Theoretically, the space complexity of a tail-recursive function can be optimized to $O(1)$. However, most programming languages (such as Java, Python, C++, Go, C#) do not support automatic optimization of tail recursion, so it's generally considered to have a space complexity of $O(n)$.

-!!! question "What is the difference between the terms 'function' and 'method'?"
+**Q**: What is the difference between the terms "function" and "method"?

-    A "function" can be executed independently, with all parameters passed explicitly. A "method" is associated with an object and is implicitly passed to the object calling it, able to operate on the data contained within an instance of a class.
+A "function" can be executed independently, with all parameters passed explicitly. A "method" is associated with an object and is implicitly passed to the object calling it, able to operate on the data contained within an instance of a class.

-    Here are some examples from common programming languages:
+Here are some examples from common programming languages:

-    - C is a procedural programming language without object-oriented concepts, so it only has functions. However, we can simulate object-oriented programming by creating structures (struct), and functions associated with these structures are equivalent to methods in other programming languages.
-    - Java and C# are object-oriented programming languages where code blocks (methods) are typically part of a class. Static methods behave like functions because they are bound to the class and cannot access specific instance variables.
-    - C++ and Python support both procedural programming (functions) and object-oriented programming (methods).
+- C is a procedural programming language without object-oriented concepts, so it only has functions. However, we can simulate object-oriented programming by creating structures (struct), and functions associated with these structures are equivalent to methods in other programming languages.
+- Java and C# are object-oriented programming languages where code blocks (methods) are typically part of a class. Static methods behave like functions because they are bound to the class and cannot access specific instance variables.
+- C++ and Python support both procedural programming (functions) and object-oriented programming (methods).

-!!! question "Does the 'Common Types of Space Complexity' figure reflect the absolute size of occupied space?"
+**Q**: Does the "Common Types of Space Complexity" figure reflect the absolute size of occupied space? 
-    No, the figure shows space complexities, which reflect growth trends, not the absolute size of the occupied space.
+No, the figure shows space complexities, which reflect growth trends, not the absolute size of the occupied space.

-    If you take $n = 8$, you might find that the values of each curve don't correspond to their functions. This is because each curve includes a constant term, intended to compress the value range into a visually comfortable range.
+If you take $n = 8$, you might find that the values of each curve don't correspond to their functions. This is because each curve includes a constant term, intended to compress the value range into a visually comfortable range.

-    In practice, since we usually don't know the "constant term" complexity of each method, it's generally not possible to choose the best solution for $n = 8$ based solely on complexity. However, for $n = 8^5$, it's much easier to choose, as the growth trend becomes dominant.
+In practice, since we usually don't know the "constant term" complexity of each method, it's generally not possible to choose the best solution for $n = 8$ based solely on complexity. However, for $n = 8^5$, it's much easier to choose, as the growth trend becomes dominant.

diff --git a/docs-en/chapter_computational_complexity/time_complexity.md b/docs-en/chapter_computational_complexity/time_complexity.md
index 78b43110c..f60e98f14 100644
--- a/docs-en/chapter_computational_complexity/time_complexity.md
+++ b/docs-en/chapter_computational_complexity/time_complexity.md
@@ -1121,8 +1121,8 @@ Constant order means the number of operations is independent of the input data s

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

### 2.   Linear Order $O(n)$

@@ -1278,8 +1278,8 @@ Linear order indicates the number of operations grows linearly with the input da

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

Operations like array traversal and linked list traversal have a time complexity of $O(n)$, where $n$ is the length of the array or list:

@@ -1451,8 +1451,8 @@ Operations like array traversal and linked list traversal have a time complexity

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

It's important to note that **the input data size $n$ should be determined based on the type of input data**. For example, in the first example, $n$ represents the input data size, while in the second example, the length of the array $n$ is the data size.

@@ -1653,8 +1653,8 @@ Quadratic order means the number of operations grows quadratically with the inpu

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

The following image compares constant order, linear order, and quadratic order time complexities.

@@ -1938,8 +1938,8 @@ For instance, in bubble sort, the outer loop runs $n - 1$ times, and the inner l

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

### 4.   Exponential Order $O(2^n)$

@@ -2171,8 +2171,8 @@ The following image and code simulate the cell division process, with a time com

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

![Exponential Order Time Complexity](time_complexity.assets/time_complexity_exponential.png){ class="animation-figure" }

@@ -2311,8 +2311,8 @@ In practice, exponential order often appears in recursive functions. For example

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

Exponential order growth is extremely rapid and is commonly seen in exhaustive search methods (brute force, backtracking, etc.). For large-scale problems, exponential order is unacceptable, often requiring dynamic programming or greedy algorithms as solutions.

@@ -2493,8 +2493,8 @@ The following image and code simulate the "halving each round" process, with a t

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

![Logarithmic Order Time Complexity](time_complexity.assets/time_complexity_logarithmic.png){ class="animation-figure" }

@@ -2633,8 +2633,8 @@ Like exponential order, logarithmic order also frequently appears in recursive f

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

Logarithmic order is typical in algorithms based on the divide-and-conquer strategy, embodying the "split into many" and "simplify complex problems" approach. It's slow-growing and is the most ideal time complexity after constant order.

@@ -2831,8 +2831,8 @@ Linear-logarithmic order often appears in nested loops, with the complexities of

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

The image below demonstrates how linear-logarithmic order is generated. Each level of a binary tree has $n$ operations, and the tree has $\log_2 n + 1$ levels, resulting in a time complexity of $O(n \log n)$.

@@ -3042,8 +3042,8 @@ Factorials are typically implemented using recursion. As shown in the image and

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

![Factorial Order Time Complexity](time_complexity.assets/time_complexity_factorial.png){ class="animation-figure" }

@@ -3409,8 +3409,8 @@ The "worst-case time complexity" corresponds to the asymptotic upper bound, deno

??? pythontutor "Visualizing Code"

-
- Full screen >
+
+ Full screen >

It's important to note that the best-case time complexity is rarely used in practice, as it is usually only achievable under very low probabilities and might be misleading. **The worst-case time complexity is more practical as it provides a safety value for efficiency**, allowing us to confidently use the algorithm.

diff --git a/docs-en/chapter_data_structure/summary.md b/docs-en/chapter_data_structure/summary.md
index 1eec400ed..58f1b338b 100644
--- a/docs-en/chapter_data_structure/summary.md
+++ b/docs-en/chapter_data_structure/summary.md
@@ -19,19 +19,19 @@ comments: true

### 2.   Q & A

-!!! question "Why does a hash table contain both linear and non-linear data structures?"
+**Q**: Why does a hash table contain both linear and non-linear data structures?

-    The underlying structure of a hash table is an array. To resolve hash collisions, we may use "chaining": each bucket in the array points to a linked list, which, when exceeding a certain threshold, might be transformed into a tree (usually a red-black tree).
-    From a storage perspective, the foundation of a hash table is an array, where each bucket slot might contain a value, a linked list, or a tree. Therefore, hash tables may contain both linear data structures (arrays, linked lists) and non-linear data structures (trees).
+The underlying structure of a hash table is an array. To resolve hash collisions, we may use "chaining": each bucket in the array points to a linked list, which, when exceeding a certain threshold, might be transformed into a tree (usually a red-black tree).
+From a storage perspective, the foundation of a hash table is an array, where each bucket slot might contain a value, a linked list, or a tree. Therefore, hash tables may contain both linear data structures (arrays, linked lists) and non-linear data structures (trees).

-!!! question "Is the length of the `char` type 1 byte?"
+**Q**: Is the length of the `char` type 1 byte?

-    The length of the `char` type is determined by the encoding method used by the programming language. For example, Java, JavaScript, TypeScript, and C# all use UTF-16 encoding (to save Unicode code points), so the length of the char type is 2 bytes.
+The length of the `char` type is determined by the encoding method used by the programming language. For example, Java, JavaScript, TypeScript, and C# all use UTF-16 encoding (to save Unicode code points), so the length of the char type is 2 bytes.

-!!! question "Is there ambiguity in calling data structures based on arrays 'static data structures'? Because operations like push and pop on stacks are 'dynamic.'"
+**Q**: Is there ambiguity in calling data structures based on arrays "static data structures"? Because operations like push and pop on stacks are "dynamic".

-    While stacks indeed allow for dynamic data operations, the data structure itself remains "static" (with unchangeable length). Even though data structures based on arrays can dynamically add or remove elements, their capacity is fixed. If the data volume exceeds the pre-allocated size, a new, larger array needs to be created, and the contents of the old array copied into it.
+While stacks indeed allow for dynamic data operations, the data structure itself remains "static" (with unchangeable length). Even though data structures based on arrays can dynamically add or remove elements, their capacity is fixed. If the data volume exceeds the pre-allocated size, a new, larger array needs to be created, and the contents of the old array copied into it.

-!!! 
question "When building stacks (queues) without specifying their size, why are they considered 'static data structures'?" +**Q**: When building stacks (queues) without specifying their size, why are they considered "static data structures"? - In high-level programming languages, we don't need to manually specify the initial capacity of stacks (queues); this task is automatically handled internally by the class. For example, the initial capacity of Java's ArrayList is usually 10. Furthermore, the expansion operation is also implemented automatically. See the subsequent "List" chapter for details. +In high-level programming languages, we don't need to manually specify the initial capacity of stacks (queues); this task is automatically handled internally by the class. For example, the initial capacity of Java's ArrayList is usually 10. Furthermore, the expansion operation is also implemented automatically. See the subsequent "List" chapter for details. diff --git a/docs-en/chapter_stack_and_queue/deque.md b/docs-en/chapter_stack_and_queue/deque.md new file mode 100644 index 000000000..b844db9ad --- /dev/null +++ b/docs-en/chapter_stack_and_queue/deque.md @@ -0,0 +1,400 @@ +# Double-Ended Queue + +In a regular queue, we can only delete elements from the head or add elements to the tail. As shown in the figure below, a "double-ended queue (deque)" offers more flexibility, allowing the addition or removal of elements at both the head and the tail. + +![Operations in Double-Ended Queue](deque.assets/deque_operations.png) + +## Common Operations in Double-Ended Queue + +The common operations in a double-ended queue are listed below, and the specific method names depend on the programming language used. + +

Table   Efficiency of Double-Ended Queue Operations

| Method Name   | Description                 | Time Complexity |
| ------------- | --------------------------- | --------------- |
| `pushFirst()` | Add an element to the front | $O(1)$          |
| `pushLast()`  | Add an element to the rear  | $O(1)$          |
| `popFirst()`  | Remove the front element    | $O(1)$          |
| `popLast()`   | Remove the rear element     | $O(1)$          |
| `peekFirst()` | Access the front element    | $O(1)$          |
| `peekLast()`  | Access the rear element     | $O(1)$          |

Similarly, we can directly use the double-ended queue classes implemented in programming languages:

=== "Python"

    ```python title="deque.py"
    from collections import deque

    # Initialize the deque
    deque: deque[int] = deque()

    # Enqueue elements
    deque.append(2)      # Add to the rear
    deque.append(5)
    deque.append(4)
    deque.appendleft(3)  # Add to the front
    deque.appendleft(1)

    # Access elements
    front: int = deque[0]  # Front element
    rear: int = deque[-1]  # Rear element

    # Dequeue elements
    pop_front: int = deque.popleft()  # Front element dequeued
    pop_rear: int = deque.pop()       # Rear element dequeued

    # Get the length of the deque
    size: int = len(deque)

    # Check if the deque is empty
    is_empty: bool = len(deque) == 0
    ```

=== "C++"

    ```cpp title="deque.cpp"
    /* Initialize the deque */
    deque<int> deque;

    /* Enqueue elements */
    deque.push_back(2);  // Add to the rear
    deque.push_back(5);
    deque.push_back(4);
    deque.push_front(3); // Add to the front
    deque.push_front(1);

    /* Access elements */
    int front = deque.front(); // Front element
    int back = deque.back();   // Rear element

    /* Dequeue elements */
    deque.pop_front(); // Front element dequeued
    deque.pop_back();  // Rear element dequeued

    /* Get the length of the deque */
    int size = deque.size();

    /* Check if the deque is empty */
    bool empty = deque.empty();
    ```

=== "Java"

    ```java title="deque.java"
    /* Initialize the deque */
    Deque<Integer> deque = new LinkedList<>();

    /* Enqueue elements */
    deque.offerLast(2);  // Add to the rear
    deque.offerLast(5);
    deque.offerLast(4);
    deque.offerFirst(3); // Add to the front
    deque.offerFirst(1);

    /* Access elements */
    int peekFirst = deque.peekFirst(); // Front element
    int peekLast = deque.peekLast();   // Rear element

    /* Dequeue elements */
    int popFirst = deque.pollFirst(); // Front element dequeued
    int popLast = deque.pollLast();   // Rear element dequeued

    /* Get the length of the deque */
    int size = deque.size();

    /* Check if the deque is empty */
    boolean isEmpty = deque.isEmpty();
    ```

=== "C#"

    ```csharp title="deque.cs"
    /* Initialize the deque */
    // In C#, LinkedList is used as a deque
    LinkedList<int> deque = new();

    /* Enqueue elements */
    deque.AddLast(2);  // Add to the rear
    deque.AddLast(5);
    deque.AddLast(4);
    deque.AddFirst(3); // Add to the front
    deque.AddFirst(1);

    /* Access elements */
    int peekFirst = deque.First.Value; // Front element
    int peekLast = deque.Last.Value;   // Rear element

    /* Dequeue elements */
    deque.RemoveFirst(); // Front element dequeued
    deque.RemoveLast();  // Rear element dequeued

    /* Get the length of the deque */
    int size = deque.Count;

    /* Check if the deque is empty */
    bool isEmpty = deque.Count == 0;
    ```

=== "Go"

    ```go title="deque_test.go"
    /* Initialize the deque */
    // In Go, use list as a deque
    deque := list.New()

    /* Enqueue elements */
    deque.PushBack(2)  // Add to the rear
    deque.PushBack(5)
    deque.PushBack(4)
    deque.PushFront(3) // Add to the front
    deque.PushFront(1)

    /* Access elements */
    front := deque.Front() // Front element
    rear := deque.Back()   // Rear element

    /* Dequeue elements */
    deque.Remove(front) // Front element dequeued
    deque.Remove(rear)  // Rear element dequeued

    /* Get the length of the deque */
    size := deque.Len()

    /* Check if the deque is empty */
    isEmpty := deque.Len() == 0
    ```
=== "Swift"

    ```swift title="deque.swift"
    /* Initialize the deque */
    // Swift does not have a built-in deque class, so Array can be used as a deque
    var deque: [Int] = []

    /* Enqueue elements */
    deque.append(2) // Add to the rear
    deque.append(5)
    deque.append(4)
    deque.insert(3, at: 0) // Add to the front
    deque.insert(1, at: 0)

    /* Access elements */
    let peekFirst = deque.first! // Front element
    let peekLast = deque.last!   // Rear element

    /* Dequeue elements */
    // Using Array, removeFirst has a complexity of O(n)
    let popFirst = deque.removeFirst() // Front element dequeued
    let popLast = deque.removeLast()   // Rear element dequeued

    /* Get the length of the deque */
    let size = deque.count

    /* Check if the deque is empty */
    let isEmpty = deque.isEmpty
    ```

=== "JS"

    ```javascript title="deque.js"
    /* Initialize the deque */
    // JavaScript does not have a built-in deque, so Array is used as a deque
    const deque = [];

    /* Enqueue elements */
    deque.push(2);
    deque.push(5);
    deque.push(4);
    // Note that unshift() has a time complexity of O(n) as it's an array
    deque.unshift(3);
    deque.unshift(1);

    /* Access elements */
    const peekFirst = deque[0];                // Front element
    const peekLast = deque[deque.length - 1];  // Rear element

    /* Dequeue elements */
    // Note that shift() has a time complexity of O(n) as it's an array
    const popFront = deque.shift(); // Front element dequeued
    const popBack = deque.pop();    // Rear element dequeued

    /* Get the length of the deque */
    const size = deque.length;

    /* Check if the deque is empty */
    const isEmpty = size === 0;
    ```

=== "TS"

    ```typescript title="deque.ts"
    /* Initialize the deque */
    // TypeScript does not have a built-in deque, so Array is used as a deque
    const deque: number[] = [];

    /* Enqueue elements */
    deque.push(2);
    deque.push(5);
    deque.push(4);
    // Note that unshift() has a time complexity of O(n) as it's an array
    deque.unshift(3);
    deque.unshift(1);

    /* Access elements */
    const peekFirst: number = deque[0];               // Front element
    const peekLast: number = deque[deque.length - 1]; // Rear element

    /* Dequeue elements */
    // Note that shift() has a time complexity of O(n) as it's an array
    const popFront: number = deque.shift() as number; // Front element dequeued
    const popBack: number = deque.pop() as number;    // Rear element dequeued

    /* Get the length of the deque */
    const size: number = deque.length;

    /* Check if the deque is empty */
    const isEmpty: boolean = size === 0;
    ```

=== "Dart"

    ```dart title="deque.dart"
    /* Initialize the deque */
    // In Dart, Queue is defined as a deque
    Queue<int> deque = Queue();

    /* Enqueue elements */
    deque.addLast(2);  // Add to the rear
    deque.addLast(5);
    deque.addLast(4);
    deque.addFirst(3); // Add to the front
    deque.addFirst(1);

    /* Access elements */
    int peekFirst = deque.first; // Front element
    int peekLast = deque.last;   // Rear element

    /* Dequeue elements */
    int popFirst = deque.removeFirst(); // Front element dequeued
    int popLast = deque.removeLast();   // Rear element dequeued

    /* Get the length of the deque */
    int size = deque.length;

    /* Check if the deque is empty */
    bool isEmpty = deque.isEmpty;
    ```
Get the length of the deque */ + int size = deque.length; + + /* Check if the deque is empty */ + bool isEmpty = deque.isEmpty; + ``` + +=== "Rust" + + ```rust title="deque.rs" + /* Initialize the deque */ + let mut deque: VecDeque = VecDeque::new(); + + /* Enqueue elements */ + deque.push_back(2); // Add to the rear + deque.push_back(5); + deque.push_back(4); + deque.push_front(3); // Add to the front + deque.push_front(1); + + /* Access elements */ + if let Some(front) = deque.front() { // Front element + } + if let Some(rear) = deque.back() { // Rear element + } + + /* Dequeue elements */ + if let Some(pop_front) = deque.pop_front() { // Front element dequeued + } + if let Some(pop_rear) = deque.pop_back() { // Rear element dequeued + } + + /* Get the length of the deque */ + let size = deque.len(); + + /* Check if the deque is empty */ + let is_empty = deque.is_empty(); + ``` + +=== "C" + + ```c title="deque.c" + // C does not provide a built-in deque + ``` + +=== "Zig" + + ```zig title="deque.zig" + + ``` + +??? pythontutor "可视化运行" + +
+ 全屏观看 > + +## Implementing a Double-Ended Queue * + +The implementation of a double-ended queue is similar to that of a regular queue, with the choice of either linked lists or arrays as the underlying data structure. + +### Implementation Based on Doubly Linked List + +Recall from the previous section that we used a regular singly linked list to implement a queue, as it conveniently allows for deleting the head node (corresponding to dequeue operation) and adding new nodes after the tail node (corresponding to enqueue operation). + +For a double-ended queue, both the head and the tail can perform enqueue and dequeue operations. In other words, a double-ended queue needs to implement another symmetric direction of operations. For this, we use a "doubly linked list" as the underlying data structure of the double-ended queue. + +As shown in the figure below, we treat the head and tail nodes of the doubly linked list as the front and rear of the double-ended queue, respectively, and implement the functionality to add and remove nodes at both ends. + +=== "LinkedListDeque" + ![Implementing Double-Ended Queue with Doubly Linked List for Enqueue and Dequeue Operations](deque.assets/linkedlist_deque.png) + +=== "pushLast()" + ![linkedlist_deque_push_last](deque.assets/linkedlist_deque_push_last.png) + +=== "pushFirst()" + ![linkedlist_deque_push_first](deque.assets/linkedlist_deque_push_first.png) + +=== "popLast()" + ![linkedlist_deque_pop_last](deque.assets/linkedlist_deque_pop_last.png) + +=== "popFirst()" + ![linkedlist_deque_pop_first](deque.assets/linkedlist_deque_pop_first.png) + +The implementation code is as follows: + +```src +[file]{linkedlist_deque}-[class]{linked_list_deque}-[func]{} +``` + +### Implementation Based on Array + +As shown in the figure below, similar to implementing a queue with an array, we can also use a circular array to implement a double-ended queue. + +=== "ArrayDeque" + ![Implementing Double-Ended Queue with Array for Enqueue and Dequeue Operations](deque.assets/array_deque.png) + +=== "pushLast()" + ![array_deque_push_last](deque.assets/array_deque_push_last.png) + +=== "pushFirst()" + ![array_deque_push_first](deque.assets/array_deque_push_first.png) + +=== "popLast()" + ![array_deque_pop_last](deque.assets/array_deque_pop_last.png) + +=== "popFirst()" + ![array_deque_pop_first](deque.assets/array_deque_pop_first.png) + +The implementation only needs to add methods for "front enqueue" and "rear dequeue": + +```src +[file]{array_deque}-[func]{} +``` + +## Applications of Double-Ended Queue + +The double-ended queue combines the logic of both stacks and queues, **thus it can implement all the application scenarios of these two, while offering greater flexibility**. + +We know that the "undo" feature in software is typically implemented using a stack: the system `pushes` each change operation onto the stack, and then `pops` to implement undoing. However, considering the limitations of system resources, software often restricts the number of undo steps (for example, only allowing the last 50 steps). When the length of the stack exceeds 50, the software needs to perform a deletion operation at the bottom of the stack (the front of the queue). **But a regular stack cannot perform this function, which is where a double-ended queue becomes necessary**. Note that the core logic of "undo" still follows the Last-In-First-Out principle of a stack, but a double-ended queue can more flexibly implement some additional logic. 
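The bounded undo history described above is straightforward to sketch with a ready-made deque class. The following Python snippet is a minimal illustration and is not part of this repository's source files; the 50-step cap and the `record()` / `undo()` helper names are assumptions made for the example. `collections.deque(maxlen=...)` discards the element at the opposite end once the limit is reached, which is exactly the "delete at the bottom of the stack" operation that a plain stack lacks.

```python
from collections import deque

# Hypothetical bounded undo history; the 50-step limit is illustrative
history = deque(maxlen=50)

def record(operation: str):
    """Push an edit operation (stack push at the rear of the deque)"""
    # Once len(history) == 50, the oldest entry at the front is dropped
    history.append(operation)

def undo():
    """Pop the most recent operation (stack pop at the rear), following LIFO"""
    return history.pop() if history else None

# Recording 60 operations keeps only the most recent 50
for i in range(60):
    record(f"operation {i}")
print(len(history))  # 50
print(undo())        # "operation 59": the last operation is undone first
```

The rear of the deque preserves the stack's LIFO behavior, while the front absorbs the capacity limit; this combination is what a regular stack cannot provide on its own.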
diff --git a/docs-en/chapter_stack_and_queue/index.md b/docs-en/chapter_stack_and_queue/index.md new file mode 100644 index 000000000..d6586aac8 --- /dev/null +++ b/docs-en/chapter_stack_and_queue/index.md @@ -0,0 +1,13 @@ +# Stack and Queue + +![Stack and Queue](../assets/covers/chapter_stack_and_queue.jpg)
+ +!!! abstract + + Stacks are like stacking cats, while queues are like cats lining up. + + They respectively represent the logical relationships of Last-In-First-Out (LIFO) and First-In-First-Out (FIFO). diff --git a/docs-en/chapter_stack_and_queue/queue.md b/docs-en/chapter_stack_and_queue/queue.md new file mode 100755 index 000000000..b491e632c --- /dev/null +++ b/docs-en/chapter_stack_and_queue/queue.md @@ -0,0 +1,376 @@ +# Queue + +A "queue" is a linear data structure that follows the First-In-First-Out (FIFO) rule. As the name suggests, a queue simulates the phenomenon of lining up, where newcomers join the back of the queue and people at the front leave one by one. + +As shown in the figure below, we call the front of the queue the "head" and the back the "tail." The operation of adding elements to the tail of the queue is termed "enqueue," and the operation of removing elements from the head is termed "dequeue." + +![Queue's First-In-First-Out Rule](queue.assets/queue_operations.png) + +## Common Operations on Queue + +The common operations on a queue are shown in the table below. Note that method names may vary across programming languages. Here, we adopt the same naming convention as used for stacks. + +<p
align="center"> Table &nbsp; Efficiency of Queue Operations </p>
+ +| Method Name | Description | Time Complexity | +| ----------- | -------------------------------------- | --------------- | +| `push()` | Enqueue an element, add it to the tail | $O(1)$ | +| `pop()` | Dequeue the head element | $O(1)$ | +| `peek()` | Access the head element | $O(1)$ | + +We can directly use the ready-made queue classes in programming languages: + +=== "Python" + + ```python title="queue.py" + from collections import deque + + # Initialize the queue + # In Python, we generally use the deque class as a queue + # Although queue.Queue() is a pure queue class, it's not very user-friendly, so it's not recommended + que: deque[int] = deque() + + # Enqueue elements + que.append(1) + que.append(3) + que.append(2) + que.append(5) + que.append(4) + + # Access the front element + front: int = que[0] + + # Dequeue an element + pop: int = que.popleft() + + # Get the length of the queue + size: int = len(que) + + # Check if the queue is empty + is_empty: bool = len(que) == 0 + ``` + +=== "C++" + + ```cpp title="queue.cpp" + /* Initialize the queue */ + queue queue; + + /* Enqueue elements */ + queue.push(1); + queue.push(3); + queue.push(2); + queue.push(5); + queue.push(4); + + /* Access the front element */ + int front = queue.front(); + + /* Dequeue an element */ + queue.pop(); + + /* Get the length of the queue */ + int size = queue.size(); + + /* Check if the queue is empty */ + bool empty = queue.empty(); + ``` + +=== "Java" + + ```java title="queue.java" + /* Initialize the queue */ + Queue queue = new LinkedList<>(); + + /* Enqueue elements */ + queue.offer(1); + queue.offer(3); + queue.offer(2); + queue.offer(5); + queue.offer(4); + + /* Access the front element */ + int peek = queue.peek(); + + /* Dequeue an element */ + int pop = queue.poll(); + + /* Get the length of the queue */ + int size = queue.size(); + + /* Check if the queue is empty */ + boolean isEmpty = queue.isEmpty(); + ``` + +=== "C#" + + ```csharp title="queue.cs" + /* Initialize the queue */ + Queue queue = new(); + + /* Enqueue elements */ + queue.Enqueue(1); + queue.Enqueue(3); + queue.Enqueue(2); + queue.Enqueue(5); + queue.Enqueue(4); + + /* Access the front element */ + int peek = queue.Peek(); + + /* Dequeue an element */ + int pop = queue.Dequeue(); + + /* Get the length of the queue */ + int size = queue.Count; + + /* Check if the queue is empty */ + bool isEmpty = queue.Count == 0; + ``` + +=== "Go" + + ```go title="queue_test.go" + /* Initialize the queue */ + // In Go, use list as a queue + queue := list.New() + + /* Enqueue elements */ + queue.PushBack(1) + queue.PushBack(3) + queue.PushBack(2) + queue.PushBack(5) + queue.PushBack(4) + + /* Access the front element */ + peek := queue.Front() + + /* Dequeue an element */ + pop := queue.Front() + queue.Remove(pop) + + /* Get the length of the queue */ + size := queue.Len() + + /* Check if the queue is empty */ + isEmpty := queue.Len() == 0 + ``` + +=== "Swift" + + ```swift title="queue.swift" + /* Initialize the queue */ + // Swift does not have a built-in queue class, so Array can be used as a queue + var queue: [Int] = [] + + /* Enqueue elements */ + queue.append(1) + queue.append(3) + queue.append(2) + queue.append(5) + queue.append(4) + + /* Access the front element */ + let peek = queue.first! 
+ + /* Dequeue an element */ + // Since it's an array, removeFirst has a complexity of O(n) + let pool = queue.removeFirst() + + /* Get the length of the queue */ + let size = queue.count + + /* Check if the queue is empty */ + let isEmpty = queue.isEmpty + ``` + +=== "JS" + + ```javascript title="queue.js" + /* Initialize the queue */ + // JavaScript does not have a built-in queue, so Array can be used as a queue + const queue = []; + + /* Enqueue elements */ + queue.push(1); + queue.push(3); + queue.push(2); + queue.push(5); + queue.push(4); + + /* Access the front element */ + const peek = queue[0]; + + /* Dequeue an element */ + // Since the underlying structure is an array, shift() method has a time complexity of O(n) + const pop = queue.shift(); + + /* Get the length of the queue */ + const size = queue.length; + + /* Check if the queue is empty */ + const empty = queue.length === 0; + ``` + +=== "TS" + + ```typescript title="queue.ts" + /* Initialize the queue */ + // TypeScript does not have a built-in queue, so Array can be used as a queue + const queue: number[] = []; + + /* Enqueue elements */ + queue.push(1); + queue.push(3); + queue.push(2); + queue.push(5); + queue.push(4); + + /* Access the front element */ + const peek = queue[0]; + + /* Dequeue an element */ + // Since the underlying structure is an array, shift() method has a time complexity of O(n) + const pop = queue.shift(); + + /* Get the length of the queue */ + const size = queue.length; + + /* Check if the queue is empty */ + const empty = queue.length === 0; + ``` + +=== "Dart" + + ```dart title="queue.dart" + /* Initialize the queue */ + // In Dart, the Queue class is a double-ended queue but can be used as a queue + Queue queue = Queue(); + + /* Enqueue elements */ + queue.add(1); + queue.add(3); + queue.add(2); + queue.add(5); + queue.add(4); + + /* Access the front element */ + int peek = queue.first; + + /* Dequeue an element */ + int pop = queue.removeFirst(); + + /* Get the length of the queue */ + int size = queue.length; + + /* Check if the queue is empty */ + bool isEmpty = queue.isEmpty; + ``` + +=== "Rust" + + ```rust title="queue.rs" + /* Initialize the double-ended queue */ + // In Rust, use a double-ended queue as a regular queue + let mut deque: VecDeque = VecDeque::new(); + + /* Enqueue elements */ + deque.push_back(1); + deque.push_back(3); + deque.push_back(2); + deque.push_back(5); + deque.push_back(4); + + /* Access the front element */ + if let Some(front) = deque.front() { + } + + /* Dequeue an element */ + if let Some(pop) = deque.pop_front() { + } + + /* Get the length of the queue */ + let size = deque.len(); + + /* Check if the queue is empty */ + let is_empty = deque.is_empty(); + ``` + +=== "C" + + ```c title="queue.c" + // C does not provide a built-in queue + ``` + +=== "Zig" + + ```zig title="queue.zig" + + ``` + +??? pythontutor "可视化运行" + +
+ 全屏观看 > + +## Implementing a Queue + +To implement a queue, we need a data structure that allows adding elements at one end and removing them at the other. Both linked lists and arrays meet this requirement. + +### Implementation Based on Linked List + +As shown in the figure below, we can consider the "head node" and "tail node" of a linked list as the "head" and "tail" of the queue, respectively. We restrict the operations so that nodes can only be added at the tail and removed at the head. + +=== "LinkedListQueue" + ![Implementing Queue with Linked List for Enqueue and Dequeue Operations](queue.assets/linkedlist_queue.png) + +=== "push()" + ![linkedlist_queue_push](queue.assets/linkedlist_queue_push.png) + +=== "pop()" + ![linkedlist_queue_pop](queue.assets/linkedlist_queue_pop.png) + +Below is the code for implementing a queue using a linked list: + +```src +[file]{linkedlist_queue}-[class]{linked_list_queue}-[func]{} +``` + +### Implementation Based on Array + +Deleting the first element in an array has a time complexity of $O(n)$, which would make the dequeue operation inefficient. However, this problem can be cleverly avoided as follows. + +We can use a variable `front` to point to the index of the head element and maintain a `size` variable to record the length of the queue. Define `rear = front + size`, which points to the position right after the tail element. + +With this design, **the effective interval of elements in the array is `[front, rear - 1]`**. The implementation methods for various operations are shown in the figure below. + +- Enqueue operation: Assign the input element to the `rear` index and increase `size` by 1. +- Dequeue operation: Simply increase `front` by 1 and decrease `size` by 1. + +Both enqueue and dequeue operations only require a single operation, each with a time complexity of $O(1)$. + +=== "ArrayQueue" + ![Implementing Queue with Array for Enqueue and Dequeue Operations](queue.assets/array_queue.png) + +=== "push()" + ![array_queue_push](queue.assets/array_queue_push.png) + +=== "pop()" + ![array_queue_pop](queue.assets/array_queue_pop.png) + +You might notice a problem: as enqueue and dequeue operations are continuously performed, both `front` and `rear` move to the right and **will eventually reach the end of the array and can't move further**. To resolve this issue, we can treat the array as a "circular array." + +For a circular array, `front` or `rear` needs to loop back to the start of the array upon reaching the end. This cyclical pattern can be achieved with a "modulo operation," as shown in the code below: + +```src +[file]{array_queue}-[class]{array_queue}-[func]{} +``` + +The above implementation of the queue still has limitations: its length is fixed. However, this issue is not difficult to resolve. We can replace the array with a dynamic array to introduce an expansion mechanism. Interested readers can try to implement this themselves. + +The comparison of the two implementations is consistent with that of the stack and is not repeated here. + +## Typical Applications of Queue + +- **Amazon Orders**. After shoppers place orders, these orders join a queue, and the system processes them in order. During events like Singles' Day, a massive number of orders are generated in a short time, making high concurrency a key challenge for engineers. +- **Various To-Do Lists**. 
In any scenario requiring "first-come, first-served" functionality, such as a printer's task queue or a restaurant's food delivery queue, a queue effectively maintains the order of processing. diff --git a/docs-en/chapter_stack_and_queue/stack.md b/docs-en/chapter_stack_and_queue/stack.md new file mode 100755 index 000000000..08a6460f5 --- /dev/null +++ b/docs-en/chapter_stack_and_queue/stack.md @@ -0,0 +1,384 @@ +# Stack + +A "stack" is a linear data structure that follows the principle of Last-In-First-Out (LIFO). + +We can compare a stack to a pile of plates on a table. To access the bottom plate, one must remove the plates on top. If we replace the plates with various types of elements (such as integers, characters, and objects), we obtain the data structure known as a stack. + +As shown in the following figure, we refer to the top of the pile of elements as the "top of the stack" and the bottom as the "bottom of the stack." The operation of adding elements to the top of the stack is called "push," and the operation of removing the top element is called "pop." + +![Stack's Last-In-First-Out Rule](stack.assets/stack_operations.png) + +## Common Operations on Stack + +The common operations on a stack are shown in the table below. The specific method names depend on the programming language used. Here, we use `push()`, `pop()`, and `peek()` as examples. + +<p
align="center"> Table &nbsp; Efficiency of Stack Operations </p>
+ +| Method | Description | Time Complexity | +| -------- | ----------------------------------------------- | --------------- | +| `push()` | Push an element onto the stack (add to the top) | $O(1)$ | +| `pop()` | Pop the top element from the stack | $O(1)$ | +| `peek()` | Access the top element of the stack | $O(1)$ | + +Typically, we can directly use the stack class built into the programming language. However, some languages may not specifically provide a stack class. In these cases, we can use the language's "array" or "linked list" as a stack and ignore operations that are not related to stack logic in the program. + +=== "Python" + + ```python title="stack.py" + # Initialize the stack + # Python does not have a built-in stack class, so a list can be used as a stack + stack: list[int] = [] + + # Push elements onto the stack + stack.append(1) + stack.append(3) + stack.append(2) + stack.append(5) + stack.append(4) + + # Access the top element of the stack + peek: int = stack[-1] + + # Pop an element from the stack + pop: int = stack.pop() + + # Get the length of the stack + size: int = len(stack) + + # Check if the stack is empty + is_empty: bool = len(stack) == 0 + ``` + +=== "C++" + + ```cpp title="stack.cpp" + /* Initialize the stack */ + stack stack; + + /* Push elements onto the stack */ + stack.push(1); + stack.push(3); + stack.push(2); + stack.push(5); + stack.push(4); + + /* Access the top element of the stack */ + int top = stack.top(); + + /* Pop an element from the stack */ + stack.pop(); // No return value + + /* Get the length of the stack */ + int size = stack.size(); + + /* Check if the stack is empty */ + bool empty = stack.empty(); + ``` + +=== "Java" + + ```java title="stack.java" + /* Initialize the stack */ + Stack stack = new Stack<>(); + + /* Push elements onto the stack */ + stack.push(1); + stack.push(3); + stack.push(2); + stack.push(5); + stack.push(4); + + /* Access the top element of the stack */ + int peek = stack.peek(); + + /* Pop an element from the stack */ + int pop = stack.pop(); + + /* Get the length of the stack */ + int size = stack.size(); + + /* Check if the stack is empty */ + boolean isEmpty = stack.isEmpty(); + ``` + +=== "C#" + + ```csharp title="stack.cs" + /* Initialize the stack */ + Stack stack = new(); + + /* Push elements onto the stack */ + stack.Push(1); + stack.Push(3); + stack.Push(2); + stack.Push(5); + stack.Push(4); + + /* Access the top element of the stack */ + int peek = stack.Peek(); + + /* Pop an element from the stack */ + int pop = stack.Pop(); + + /* Get the length of the stack */ + int size = stack.Count; + + /* Check if the stack is empty */ + bool isEmpty = stack.Count == 0; + ``` + +=== "Go" + + ```go title="stack_test.go" + /* Initialize the stack */ + // In Go, it is recommended to use a Slice as a stack + var stack []int + + /* Push elements onto the stack */ + stack = append(stack, 1) + stack = append(stack, 3) + stack = append(stack, 2) + stack = append(stack, 5) + stack = append(stack, 4) + + /* Access the top element of the stack */ + peek := stack[len(stack)-1] + + /* Pop an element from the stack */ + pop := stack[len(stack)-1] + stack = stack[:len(stack)-1] + + /* Get the length of the stack */ + size := len(stack) + + /* Check if the stack is empty */ + isEmpty := len(stack) == 0 + ``` + +=== "Swift" + + ```swift title="stack.swift" + /* Initialize the stack */ + // Swift does not have a built-in stack class, so Array can be used as a stack + var stack: [Int] = [] + + /* Push elements onto the stack */ + 
stack.append(1) + stack.append(3) + stack.append(2) + stack.append(5) + stack.append(4) + + /* Access the top element of the stack */ + let peek = stack.last! + + /* Pop an element from the stack */ + let pop = stack.removeLast() + + /* Get the length of the stack */ + let size = stack.count + + /* Check if the stack is empty */ + let isEmpty = stack.isEmpty + ``` + +=== "JS" + + ```javascript title="stack.js" + /* Initialize the stack */ + // JavaScript does not have a built-in stack class, so Array can be used as a stack + const stack = []; + + /* Push elements onto the stack */ + stack.push(1); + stack.push(3); + stack.push(2); + stack.push(5); + stack.push(4); + + /* Access the top element of the stack */ + const peek = stack[stack.length-1]; + + /* Pop an element from the stack */ + const pop = stack.pop(); + + /* Get the length of the stack */ + const size = stack.length; + + /* Check if the stack is empty */ + const is_empty = stack.length === 0; + ``` + +=== "TS" + + ```typescript title="stack.ts" + /* Initialize the stack */ + // TypeScript does not have a built-in stack class, so Array can be used as a stack + const stack: number[] = []; + + /* Push elements onto the stack */ + stack.push(1); + stack.push(3); + stack.push(2); + stack.push(5); + stack.push(4); + + /* Access the top element of the stack */ + const peek = stack[stack.length - 1]; + + /* Pop an element from the stack */ + const pop = stack.pop(); + + /* Get the length of the stack */ + const size = stack.length; + + /* Check if the stack is empty */ + const is_empty = stack.length === 0; + ``` + +=== "Dart" + + ```dart title="stack.dart" + /* Initialize the stack */ + // Dart does not have a built-in stack class, so List can be used as a stack + List stack = []; + + /* Push elements onto the stack */ + stack.add(1); + stack.add(3); + stack.add(2); + stack.add(5); + stack.add(4); + + /* Access the top element of the stack */ + int peek = stack.last; + + /* Pop an element from the stack */ + int pop = stack.removeLast(); + + /* Get the length of the stack */ + int size = stack.length; + + /* Check if the stack is empty */ + bool isEmpty = stack.isEmpty; + ``` + +=== "Rust" + + ```rust title="stack.rs" + /* Initialize the stack */ + // Use Vec as a stack + let mut stack: Vec = Vec::new(); + + /* Push elements onto the stack */ + stack.push(1); + stack.push(3); + stack.push(2); + stack.push(5); + stack.push(4); + + /* Access the top element of the stack */ + let top = stack.last().unwrap(); + + /* Pop an element from the stack */ + let pop = stack.pop().unwrap(); + + /* Get the length of the stack */ + let size = stack.len(); + + /* Check if the stack is empty */ + let is_empty = stack.is_empty(); + ``` + +=== "C" + + ```c title="stack.c" + // C does not provide a built-in stack + ``` + +=== "Zig" + + ```zig title="stack.zig" + + ``` + +??? pythontutor "可视化运行" + +
+ 全屏观看 > + +## Implementing a Stack + +To understand the mechanics of a stack more deeply, let's try implementing a stack class ourselves. + +A stack follows the principle of Last-In-First-Out, which means we can only add or remove elements at the top of the stack. However, both arrays and linked lists allow adding and removing elements at any position, **therefore a stack can be seen as a restricted array or linked list**. In other words, we can "mask" some unrelated operations of arrays or linked lists to make their logic conform to the characteristics of a stack. + +### Implementation Based on Linked List + +When implementing a stack using a linked list, we can consider the head node of the list as the top of the stack and the tail node as the bottom of the stack. + +As shown in the figure below, for the push operation, we simply insert elements at the head of the linked list. This method of node insertion is known as "head insertion." For the pop operation, we just need to remove the head node from the list. + +=== "LinkedListStack" + ![Implementing Stack with Linked List for Push and Pop Operations](stack.assets/linkedlist_stack.png) + +=== "push()" + ![linkedlist_stack_push](stack.assets/linkedlist_stack_push.png) + +=== "pop()" + ![linkedlist_stack_pop](stack.assets/linkedlist_stack_pop.png) + +Below is an example code for implementing a stack based on a linked list: + +```src +[file]{linkedlist_stack}-[class]{linked_list_stack}-[func]{} +``` + +### Implementation Based on Array + +When implementing a stack using an array, we can consider the end of the array as the top of the stack. As shown in the figure below, push and pop operations correspond to adding and removing elements at the end of the array, respectively, both with a time complexity of $O(1)$. + +=== "ArrayStack" + ![Implementing Stack with Array for Push and Pop Operations](stack.assets/array_stack.png) + +=== "push()" + ![array_stack_push](stack.assets/array_stack_push.png) + +=== "pop()" + ![array_stack_pop](stack.assets/array_stack_pop.png) + +Since the elements to be pushed onto the stack may continuously increase, we can use a dynamic array, thus avoiding the need to handle array expansion ourselves. Here is an example code: + +```src +[file]{array_stack}-[class]{array_stack}-[func]{} +``` + +## Comparison of the Two Implementations + +**Supported Operations** + +Both implementations support all the operations defined in a stack. The array implementation additionally supports random access, but this is beyond the scope of a stack definition and is generally not used. + +**Time Efficiency** + +In the array-based implementation, both push and pop operations occur in pre-allocated continuous memory, which has good cache locality and therefore higher efficiency. However, if the push operation exceeds the array capacity, it triggers a resizing mechanism, making the time complexity of that push operation $O(n)$. + +In the linked list implementation, list expansion is very flexible, and there is no efficiency decrease issue as in array expansion. However, the push operation requires initializing a node object and modifying pointers, so its efficiency is relatively lower. If the elements being pushed are already node objects, then the initialization step can be skipped, improving efficiency. 
+ +Thus, when the elements for push and pop operations are basic data types like `int` or `double`, we can draw the following conclusions: + +- The array-based stack implementation's efficiency decreases during expansion, but since expansion is a low-frequency operation, its average efficiency is higher. +- The linked list-based stack implementation provides more stable efficiency performance. + +**Space Efficiency** + +When initializing a list, the system allocates an "initial capacity," which might exceed the actual need; moreover, the expansion mechanism usually increases capacity by a specific factor (like doubling), which may also exceed the actual need. Therefore, **the array-based stack might waste some space**. + +However, since linked list nodes require extra space for storing pointers, **the space occupied by linked list nodes is relatively larger**. + +In summary, we cannot simply determine which implementation is more memory-efficient. It requires analysis based on specific circumstances. + +## Typical Applications of Stack + +- **Back and forward in browsers, undo and redo in software**. Every time we open a new webpage, the browser pushes the previous page onto the stack, allowing us to go back to the previous page through the back operation, which is essentially a pop operation. To support both back and forward, two stacks are needed to work together. +- **Memory management in programs**. Each time a function is called, the system adds a stack frame at the top of the stack to record the function's context information. In recursive functions, the downward recursion phase keeps pushing onto the stack, while the upward backtracking phase keeps popping from the stack. diff --git a/docs-en/chapter_stack_and_queue/summary.md b/docs-en/chapter_stack_and_queue/summary.md new file mode 100644 index 000000000..706434032 --- /dev/null +++ b/docs-en/chapter_stack_and_queue/summary.md @@ -0,0 +1,31 @@ +# Summary + +### Key Review + +- A stack is a data structure that follows the Last-In-First-Out (LIFO) principle and can be implemented using either arrays or linked lists. +- In terms of time efficiency, the array implementation of a stack has higher average efficiency, but during expansion, the time complexity for a single push operation can degrade to $O(n)$. In contrast, the linked list implementation of a stack offers more stable efficiency. +- Regarding space efficiency, the array implementation of a stack may lead to some level of space wastage. However, it's important to note that the memory space occupied by nodes in a linked list is generally larger than that for elements in an array. +- A queue is a data structure that follows the First-In-First-Out (FIFO) principle, and it can also be implemented using either arrays or linked lists. The conclusions regarding time and space efficiency for queues are similar to those for stacks. +- A double-ended queue is a more flexible type of queue that allows adding and removing elements from both ends. + +### Q & A + +**Q**: Is the browser's forward and backward functionality implemented with a doubly linked list? + +The forward and backward functionality of a browser fundamentally represents the "stack" concept. When a user visits a new page, it is added to the top of the stack; when they click the back button, the page is popped from the top. A double-ended queue can conveniently implement some additional operations, as mentioned in the "Double-Ended Queue" section. 
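To make the two-stack navigation concrete, here is a minimal Python sketch; the `BrowserHistory` class and its method names are hypothetical and not taken from this repository's code.

```python
class BrowserHistory:
    """Hypothetical model of browser navigation built on two stacks"""

    def __init__(self, homepage: str):
        self.back_stack: list[str] = []     # Pages behind the current one
        self.forward_stack: list[str] = []  # Pages ahead after going back
        self.current = homepage

    def visit(self, url: str):
        """Opening a new page pushes the current one and clears forward history"""
        self.back_stack.append(self.current)
        self.forward_stack.clear()
        self.current = url

    def back(self) -> str:
        """Back pops the back stack; the current page moves to the forward stack"""
        if self.back_stack:
            self.forward_stack.append(self.current)
            self.current = self.back_stack.pop()
        return self.current

    def forward(self) -> str:
        """Forward mirrors back(), popping from the forward stack"""
        if self.forward_stack:
            self.back_stack.append(self.current)
            self.current = self.forward_stack.pop()
        return self.current

# Visit two pages, go back once, then forward again
browser = BrowserHistory("home")
browser.visit("a")
browser.visit("b")
print(browser.back())     # "a"
print(browser.forward())  # "b"
```

Replacing the two `list` stacks with deques would additionally allow capping the history length, as discussed in the "Double-Ended Queue" section.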
+ +**Q**: After popping from a stack, is it necessary to free the memory of the popped node? + +If the popped node will still be used later, it's not necessary to free its memory. In languages like Java and Python that have automatic garbage collection, manual memory release isn't required; in C and C++, manual memory release is necessary if the node will no longer be used. + +**Q**: A double-ended queue seems like two stacks joined together. What are its uses? + +A double-ended queue is essentially a combination of a stack and a queue, or like two stacks joined together. It exhibits both stack and queue logic, therefore enabling the implementation of all applications of stacks and queues with added flexibility. + +**Q**: How exactly are undo and redo implemented? + +Undo and redo are implemented using two stacks: Stack A for undo and Stack B for redo. + +1. Each time a user performs an operation, it is pushed onto Stack A, and Stack B is cleared. +2. When the user executes an "undo", the most recent operation is popped from Stack A and pushed onto Stack B. +3. When the user executes a "redo", the most recent operation is popped from Stack B and pushed back onto Stack A. diff --git a/docs/chapter_array_and_linkedlist/array.md b/docs/chapter_array_and_linkedlist/array.md index bbf2bdce2..7f7a6bd2d 100755 --- a/docs/chapter_array_and_linkedlist/array.md +++ b/docs/chapter_array_and_linkedlist/array.md @@ -121,7 +121,8 @@ comments: true ??? pythontutor "可视化运行" - +
+ 全屏观看 > ### 2.   访问元素 @@ -293,8 +294,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ### 3.   插入元素 @@ -475,8 +476,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ### 4.   删除元素 @@ -634,8 +635,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 总的来看,数组的插入与删除操作有以下缺点。 @@ -858,8 +859,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ### 6.   查找元素 @@ -1026,8 +1027,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ### 7.   扩容数组 @@ -1236,8 +1237,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ## 4.1.2   数组的优点与局限性 diff --git a/docs/chapter_array_and_linkedlist/linked_list.md b/docs/chapter_array_and_linkedlist/linked_list.md index cb28b1371..8a64cd9c3 100755 --- a/docs/chapter_array_and_linkedlist/linked_list.md +++ b/docs/chapter_array_and_linkedlist/linked_list.md @@ -398,7 +398,8 @@ comments: true ??? pythontutor "可视化运行" - +
+ 全屏观看 > 数组整体是一个变量,比如数组 `nums` 包含元素 `nums[0]` 和 `nums[1]` 等,而链表是由多个独立的节点对象组成的。**我们通常将头节点当作链表的代称**,比如以上代码中的链表可记作链表 `n0` 。 @@ -546,8 +547,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ### 3.   删除节点 @@ -736,8 +737,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ### 4.   访问节点 @@ -915,8 +916,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ### 5.   查找节点 @@ -1117,8 +1118,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ## 4.2.2   数组 vs. 链表 diff --git a/docs/chapter_array_and_linkedlist/list.md b/docs/chapter_array_and_linkedlist/list.md index 4c4fd4c64..218e169c8 100755 --- a/docs/chapter_array_and_linkedlist/list.md +++ b/docs/chapter_array_and_linkedlist/list.md @@ -141,7 +141,8 @@ comments: true ??? pythontutor "可视化运行" - +
+ 全屏观看 > ### 2.   访问元素 @@ -264,7 +265,8 @@ comments: true ??? pythontutor "可视化运行" - +
+ 全屏观看 > ### 3.   插入与删除元素 @@ -498,7 +500,8 @@ comments: true ??? pythontutor "可视化运行" - +
+ 全屏观看 > ### 4.   遍历列表 @@ -685,7 +688,8 @@ comments: true ??? pythontutor "可视化运行" - +
+ 全屏观看 > ### 5.   拼接列表 @@ -790,7 +794,8 @@ comments: true ??? pythontutor "可视化运行" - +
+ 全屏观看 > ### 6.   排序列表 @@ -881,7 +886,8 @@ comments: true ??? pythontutor "可视化运行" - +
+ 全屏观看 > ## 4.3.2   列表实现 @@ -2145,5 +2151,5 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > diff --git a/docs/chapter_backtracking/backtracking_algorithm.md b/docs/chapter_backtracking/backtracking_algorithm.md index 9e5f6bd49..448d79bfa 100644 --- a/docs/chapter_backtracking/backtracking_algorithm.md +++ b/docs/chapter_backtracking/backtracking_algorithm.md @@ -208,8 +208,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ![在前序遍历中搜索节点](backtracking_algorithm.assets/preorder_find_nodes.png){ class="animation-figure" } @@ -479,8 +479,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 在每次“尝试”中,我们通过将当前节点添加进 `path` 来记录路径;而在“回退”前,我们需要将该节点从 `path` 中弹出,**以恢复本次尝试之前的状态**。 @@ -791,8 +791,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > “剪枝”是一个非常形象的名词。如图 13-3 所示,在搜索过程中,**我们“剪掉”了不满足约束条件的搜索分支**,避免许多无意义的尝试,从而提高了搜索效率。 @@ -1672,8 +1672,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 根据题意,我们在找到值为 $7$ 的节点后应该继续搜索,**因此需要将记录解之后的 `return` 语句删除**。图 13-4 对比了保留或删除 `return` 语句的搜索过程。 diff --git a/docs/chapter_backtracking/n_queens_problem.md b/docs/chapter_backtracking/n_queens_problem.md index 16f467e26..9c1c72b8f 100644 --- a/docs/chapter_backtracking/n_queens_problem.md +++ b/docs/chapter_backtracking/n_queens_problem.md @@ -664,8 +664,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 逐行放置 $n$ 次,考虑列约束,则从第一行到最后一行分别有 $n$、$n-1$、$\dots$、$2$、$1$ 个选择,**因此时间复杂度为 $O(n!)$** 。实际上,根据对角线约束的剪枝也能够大幅缩小搜索空间,因而搜索效率往往优于以上时间复杂度。 diff --git a/docs/chapter_backtracking/permutations_problem.md b/docs/chapter_backtracking/permutations_problem.md index 6a1574d71..7a378527f 100644 --- a/docs/chapter_backtracking/permutations_problem.md +++ b/docs/chapter_backtracking/permutations_problem.md @@ -473,8 +473,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ## 13.2.2   考虑相等元素的情况 @@ -949,8 +949,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 假设元素两两之间互不相同,则 $n$ 个元素共有 $n!$ 种排列(阶乘);在记录结果时,需要复制长度为 $n$ 的列表,使用 $O(n)$ 时间。**因此时间复杂度为 $O(n!n)$** 。 diff --git a/docs/chapter_backtracking/subset_sum_problem.md b/docs/chapter_backtracking/subset_sum_problem.md index bebd5511a..ab897c23f 100644 --- a/docs/chapter_backtracking/subset_sum_problem.md +++ b/docs/chapter_backtracking/subset_sum_problem.md @@ -430,8 +430,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 向以上代码输入数组 $[3, 4, 5]$ 和目标元素 $9$ ,输出结果为 $[3, 3, 3], [4, 5], [5, 4]$ 。**虽然成功找出了所有和为 $9$ 的子集,但其中存在重复的子集 $[4, 5]$ 和 $[5, 4]$** 。 @@ -913,8 +913,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 图 13-12 所示为将数组 $[3, 4, 5]$ 和目标元素 $9$ 输入以上代码后的整体回溯过程。 @@ -1437,8 +1437,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 图 13-14 展示了数组 $[4, 4, 5]$ 和目标元素 $9$ 的回溯过程,共包含四种剪枝操作。请你将图示与代码注释相结合,理解整个搜索过程,以及每种剪枝操作是如何工作的。 diff --git a/docs/chapter_computational_complexity/iteration_and_recursion.md b/docs/chapter_computational_complexity/iteration_and_recursion.md index 45fa47c7e..c5e938ec9 100644 --- a/docs/chapter_computational_complexity/iteration_and_recursion.md +++ b/docs/chapter_computational_complexity/iteration_and_recursion.md @@ -184,8 +184,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 图 2-1 是该求和函数的流程框图。 @@ -395,8 +395,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > **`while` 循环比 `for` 循环的自由度更高**。在 `while` 循环中,我们可以自由地设计条件变量的初始化和更新步骤。 @@ -619,8 +619,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 总的来说,**`for` 循环的代码更加紧凑,`while` 循环更加灵活**,两者都可以实现迭代结构。选择使用哪一个应该根据特定问题的需求来决定。 @@ -838,8 +838,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 图 2-2 是该嵌套循环的流程框图。 @@ -1048,8 +1048,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 图 2-3 展示了该函数的递归过程。 @@ -1249,8 +1249,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 尾递归的执行过程如图 2-5 所示。对比普通递归和尾递归,两者的求和操作的执行点是不同的。 @@ -1462,8 +1462,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 观察以上代码,我们在函数内递归调用了两个函数,**这意味着从一个调用产生了两个调用分支**。如图 2-6 所示,这样不断递归调用下去,最终将产生一棵层数为 $n$ 的「递归树 recursion tree」。 @@ -1785,8 +1785,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 观察以上代码,当递归转化为迭代后,代码变得更加复杂了。尽管迭代和递归在很多情况下可以互相转化,但不一定值得这样做,有以下两点原因。 diff --git a/docs/chapter_computational_complexity/space_complexity.md b/docs/chapter_computational_complexity/space_complexity.md index b707a4322..be097c5e0 100755 --- a/docs/chapter_computational_complexity/space_complexity.md +++ b/docs/chapter_computational_complexity/space_complexity.md @@ -1063,8 +1063,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ### 2.   线性阶 $O(n)$ @@ -1332,8 +1332,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 如图 2-17 所示,此函数的递归深度为 $n$ ,即同时存在 $n$ 个未返回的 `linear_recur()` 函数,使用 $O(n)$ 大小的栈帧空间: @@ -1478,8 +1478,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ![递归函数产生的线性阶空间复杂度](space_complexity.assets/space_complexity_recursive_linear.png){ class="animation-figure" } @@ -1703,8 +1703,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 如图 2-18 所示,该函数的递归深度为 $n$ ,在每个递归函数中都初始化了一个数组,长度分别为 $n$、$n-1$、$\dots$、$2$、$1$ ,平均长度为 $n / 2$ ,因此总体占用 $O(n^2)$ 空间: @@ -1867,8 +1867,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ![递归函数产生的平方阶空间复杂度](space_complexity.assets/space_complexity_recursive_quadratic.png){ class="animation-figure" } @@ -2045,8 +2045,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ![满二叉树产生的指数阶空间复杂度](space_complexity.assets/space_complexity_exponential.png){ class="animation-figure" } diff --git a/docs/chapter_computational_complexity/time_complexity.md b/docs/chapter_computational_complexity/time_complexity.md index 4242619c1..24f21f52b 100755 --- a/docs/chapter_computational_complexity/time_complexity.md +++ b/docs/chapter_computational_complexity/time_complexity.md @@ -1125,8 +1125,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ### 2.   线性阶 $O(n)$ @@ -1282,8 +1282,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 遍历数组和遍历链表等操作的时间复杂度均为 $O(n)$ ,其中 $n$ 为数组或链表的长度: @@ -1455,8 +1455,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 值得注意的是,**输入数据大小 $n$ 需根据输入数据的类型来具体确定**。比如在第一个示例中,变量 $n$ 为输入数据大小;在第二个示例中,数组长度 $n$ 为数据大小。 @@ -1657,8 +1657,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 图 2-10 对比了常数阶、线性阶和平方阶三种时间复杂度。 @@ -1942,8 +1942,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ### 4.   指数阶 $O(2^n)$ @@ -2175,8 +2175,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ![指数阶的时间复杂度](time_complexity.assets/time_complexity_exponential.png){ class="animation-figure" } @@ -2315,8 +2315,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 指数阶增长非常迅速,在穷举法(暴力搜索、回溯等)中比较常见。对于数据规模较大的问题,指数阶是不可接受的,通常需要使用动态规划或贪心算法等来解决。 @@ -2497,8 +2497,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ![对数阶的时间复杂度](time_complexity.assets/time_complexity_logarithmic.png){ class="animation-figure" } @@ -2637,8 +2637,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 对数阶常出现于基于分治策略的算法中,体现了“一分为多”和“化繁为简”的算法思想。它增长缓慢,是仅次于常数阶的理想的时间复杂度。 @@ -2835,8 +2835,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 图 2-13 展示了线性对数阶的生成方式。二叉树的每一层的操作总数都为 $n$ ,树共有 $\log_2 n + 1$ 层,因此时间复杂度为 $O(n \log n)$ 。 @@ -3046,8 +3046,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ![阶乘阶的时间复杂度](time_complexity.assets/time_complexity_factorial.png){ class="animation-figure" } @@ -3413,8 +3413,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 值得说明的是,我们在实际中很少使用最佳时间复杂度,因为通常只有在很小概率下才能达到,可能会带来一定的误导性。**而最差时间复杂度更为实用,因为它给出了一个效率安全值**,让我们可以放心地使用算法。 diff --git a/docs/chapter_data_structure/basic_data_types.md b/docs/chapter_data_structure/basic_data_types.md index 18d9ff3de..21e2b2a05 100644 --- a/docs/chapter_data_structure/basic_data_types.md +++ b/docs/chapter_data_structure/basic_data_types.md @@ -169,4 +169,5 @@ comments: true ??? pythontutor "可视化运行" - +
+ 全屏观看 > diff --git a/docs/chapter_divide_and_conquer/binary_search_recur.md b/docs/chapter_divide_and_conquer/binary_search_recur.md index 105fdcb79..173d0a376 100644 --- a/docs/chapter_divide_and_conquer/binary_search_recur.md +++ b/docs/chapter_divide_and_conquer/binary_search_recur.md @@ -392,5 +392,5 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > diff --git a/docs/chapter_divide_and_conquer/build_binary_tree_problem.md b/docs/chapter_divide_and_conquer/build_binary_tree_problem.md index c79586911..9e7725ff1 100644 --- a/docs/chapter_divide_and_conquer/build_binary_tree_problem.md +++ b/docs/chapter_divide_and_conquer/build_binary_tree_problem.md @@ -447,8 +447,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 图 12-8 展示了构建二叉树的递归过程,各个节点是在向下“递”的过程中建立的,而各条边(引用)是在向上“归”的过程中建立的。 diff --git a/docs/chapter_divide_and_conquer/hanota_problem.md b/docs/chapter_divide_and_conquer/hanota_problem.md index cc26878b3..917e7575d 100644 --- a/docs/chapter_divide_and_conquer/hanota_problem.md +++ b/docs/chapter_divide_and_conquer/hanota_problem.md @@ -485,8 +485,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 如图 12-15 所示,汉诺塔问题形成一棵高度为 $n$ 的递归树,每个节点代表一个子问题,对应一个开启的 `dfs()` 函数,**因此时间复杂度为 $O(2^n)$ ,空间复杂度为 $O(n)$** 。 diff --git a/docs/chapter_dynamic_programming/dp_problem_features.md b/docs/chapter_dynamic_programming/dp_problem_features.md index 180b52caf..0bcd0bc0e 100644 --- a/docs/chapter_dynamic_programming/dp_problem_features.md +++ b/docs/chapter_dynamic_programming/dp_problem_features.md @@ -303,8 +303,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 图 14-7 展示了以上代码的动态规划过程。 @@ -541,8 +541,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ## 14.2.2   无后效性 @@ -878,8 +878,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 在上面的案例中,由于仅需多考虑前面一个状态,因此我们仍然可以通过扩展状态定义,使得问题重新满足无后效性。然而,某些问题具有非常严重的“有后效性”。 diff --git a/docs/chapter_dynamic_programming/dp_solution_pipeline.md b/docs/chapter_dynamic_programming/dp_solution_pipeline.md index b708431f1..5a586198e 100644 --- a/docs/chapter_dynamic_programming/dp_solution_pipeline.md +++ b/docs/chapter_dynamic_programming/dp_solution_pipeline.md @@ -368,8 +368,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 图 14-14 给出了以 $dp[2, 1]$ 为根节点的递归树,其中包含一些重叠子问题,其数量会随着网格 `grid` 的尺寸变大而急剧增多。 @@ -704,8 +704,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 如图 14-15 所示,在引入记忆化后,所有子问题的解只需计算一次,因此时间复杂度取决于状态总数,即网格尺寸 $O(nm)$ 。 @@ -1056,8 +1056,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 图 14-16 展示了最小路径和的状态转移过程,其遍历了整个网格,**因此时间复杂度为 $O(nm)$** 。 @@ -1421,5 +1421,5 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > diff --git a/docs/chapter_dynamic_programming/edit_distance_problem.md b/docs/chapter_dynamic_programming/edit_distance_problem.md index 7c12ff43f..57d68008e 100644 --- a/docs/chapter_dynamic_programming/edit_distance_problem.md +++ b/docs/chapter_dynamic_programming/edit_distance_problem.md @@ -452,8 +452,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 如图 14-30 所示,编辑距离问题的状态转移过程与背包问题非常类似,都可以看作填写一个二维网格的过程。 @@ -910,5 +910,5 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > diff --git a/docs/chapter_dynamic_programming/intro_to_dynamic_programming.md b/docs/chapter_dynamic_programming/intro_to_dynamic_programming.md index c19c6d5f2..ddb619ce9 100644 --- a/docs/chapter_dynamic_programming/intro_to_dynamic_programming.md +++ b/docs/chapter_dynamic_programming/intro_to_dynamic_programming.md @@ -387,8 +387,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ## 14.1.1   方法一:暴力搜索 @@ -645,8 +645,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 图 14-3 展示了暴力搜索形成的递归树。对于问题 $dp[n]$ ,其递归树的深度为 $n$ ,时间复杂度为 $O(2^n)$ 。指数阶属于爆炸式增长,如果我们输入一个比较大的 $n$ ,则会陷入漫长的等待之中。 @@ -987,8 +987,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 观察图 14-4 ,**经过记忆化处理后,所有重叠子问题都只需计算一次,时间复杂度优化至 $O(n)$** ,这是一个巨大的飞跃。 @@ -1246,8 +1246,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 图 14-5 模拟了以上代码的执行过程。 @@ -1469,8 +1469,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 观察以上代码,由于省去了数组 `dp` 占用的空间,因此空间复杂度从 $O(n)$ 降至 $O(1)$ 。 diff --git a/docs/chapter_dynamic_programming/knapsack_problem.md b/docs/chapter_dynamic_programming/knapsack_problem.md index e86af6999..08d0c56f7 100644 --- a/docs/chapter_dynamic_programming/knapsack_problem.md +++ b/docs/chapter_dynamic_programming/knapsack_problem.md @@ -318,8 +318,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 如图 14-18 所示,由于每个物品都会产生不选和选两条搜索分支,因此时间复杂度为 $O(2^n)$ 。 @@ -661,8 +661,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 图 14-19 展示了在记忆化搜索中被剪掉的搜索分支。 @@ -985,8 +985,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 如图 14-20 所示,时间复杂度和空间复杂度都由数组 `dp` 大小决定,即 $O(n \times cap)$ 。 @@ -1343,5 +1343,5 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > diff --git a/docs/chapter_dynamic_programming/unbounded_knapsack_problem.md b/docs/chapter_dynamic_programming/unbounded_knapsack_problem.md index 072307206..fd2b3f071 100644 --- a/docs/chapter_dynamic_programming/unbounded_knapsack_problem.md +++ b/docs/chapter_dynamic_programming/unbounded_knapsack_problem.md @@ -349,8 +349,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ### 3.   空间优化 @@ -674,8 +674,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ## 14.5.2   零钱兑换问题 @@ -1094,8 +1094,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 图 14-25 展示了零钱兑换的动态规划过程,和完全背包问题非常相似。 @@ -1478,8 +1478,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ## 14.5.3   零钱兑换问题 II @@ -1854,8 +1854,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ### 3.   空间优化 @@ -2164,5 +2164,5 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > diff --git a/docs/chapter_graph/graph_operations.md b/docs/chapter_graph/graph_operations.md index 4306024d9..0d021481c 100644 --- a/docs/chapter_graph/graph_operations.md +++ b/docs/chapter_graph/graph_operations.md @@ -1047,8 +1047,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ## 9.2.2   基于邻接表的实现 @@ -2072,8 +2072,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ## 9.2.3   效率对比 diff --git a/docs/chapter_graph/graph_traversal.md b/docs/chapter_graph/graph_traversal.md index 741173618..9331e1b55 100644 --- a/docs/chapter_graph/graph_traversal.md +++ b/docs/chapter_graph/graph_traversal.md @@ -420,8 +420,8 @@ BFS 通常借助队列来实现,代码如下所示。队列具有“先入先 ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 代码相对抽象,建议对照图 9-10 来加深理解。 @@ -828,8 +828,8 @@ BFS 通常借助队列来实现,代码如下所示。队列具有“先入先 ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 深度优先遍历的算法流程如图 9-12 所示。 diff --git a/docs/chapter_greedy/fractional_knapsack_problem.md b/docs/chapter_greedy/fractional_knapsack_problem.md index f89261f9c..5171dd476 100644 --- a/docs/chapter_greedy/fractional_knapsack_problem.md +++ b/docs/chapter_greedy/fractional_knapsack_problem.md @@ -466,8 +466,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 除排序之外,在最差情况下,需要遍历整个物品列表,**因此时间复杂度为 $O(n)$** ,其中 $n$ 为物品数量。 diff --git a/docs/chapter_greedy/greedy_algorithm.md b/docs/chapter_greedy/greedy_algorithm.md index 6e59243ea..a409fcd03 100644 --- a/docs/chapter_greedy/greedy_algorithm.md +++ b/docs/chapter_greedy/greedy_algorithm.md @@ -291,8 +291,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 你可能会不由地发出感叹:So clean !贪心算法仅用约十行代码就解决了零钱兑换问题。 diff --git a/docs/chapter_greedy/max_capacity_problem.md b/docs/chapter_greedy/max_capacity_problem.md index 129f7797c..7268cb9bc 100644 --- a/docs/chapter_greedy/max_capacity_problem.md +++ b/docs/chapter_greedy/max_capacity_problem.md @@ -376,8 +376,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ### 3.   正确性证明 diff --git a/docs/chapter_greedy/max_product_cutting_problem.md b/docs/chapter_greedy/max_product_cutting_problem.md index 049687a71..43c6d650c 100644 --- a/docs/chapter_greedy/max_product_cutting_problem.md +++ b/docs/chapter_greedy/max_product_cutting_problem.md @@ -351,8 +351,8 @@ $$ ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ![最大切分乘积的计算方法](max_product_cutting_problem.assets/max_product_cutting_greedy_calculation.png){ class="animation-figure" } diff --git a/docs/chapter_hashing/hash_algorithm.md b/docs/chapter_hashing/hash_algorithm.md index d5e64e30e..4a51db2e3 100644 --- a/docs/chapter_hashing/hash_algorithm.md +++ b/docs/chapter_hashing/hash_algorithm.md @@ -568,8 +568,8 @@ index = hash(key) % capacity ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 观察发现,每种哈希算法的最后一步都是对大质数 $1000000007$ 取模,以确保哈希值在合适的范围内。值得思考的是,为什么要强调对质数取模,或者说对合数取模的弊端是什么?这是一个有趣的问题。 @@ -877,7 +877,8 @@ $$ ??? pythontutor "可视化运行" - +
+ 全屏观看 > 在许多编程语言中,**只有不可变对象才可作为哈希表的 `key`** 。假如我们将列表(动态数组)作为 `key` ,当列表的内容发生变化时,它的哈希值也随之改变,我们就无法在哈希表中查询到原先的 `value` 了。 diff --git a/docs/chapter_hashing/hash_collision.md b/docs/chapter_hashing/hash_collision.md index 25f49437d..608e7f136 100644 --- a/docs/chapter_hashing/hash_collision.md +++ b/docs/chapter_hashing/hash_collision.md @@ -1318,8 +1318,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 值得注意的是,当链表很长时,查询效率 $O(n)$ 很差。**此时可以将链表转换为“AVL 树”或“红黑树”**,从而将查询操作的时间复杂度优化至 $O(\log n)$ 。 diff --git a/docs/chapter_hashing/hash_map.md b/docs/chapter_hashing/hash_map.md index f8301c2a7..97dde1482 100755 --- a/docs/chapter_hashing/hash_map.md +++ b/docs/chapter_hashing/hash_map.md @@ -285,7 +285,8 @@ comments: true ??? pythontutor "可视化运行" - +
+ 全屏观看 > 哈希表有三种常用的遍历方式:遍历键值对、遍历键和遍历值。示例代码如下: @@ -480,7 +481,8 @@ comments: true ??? pythontutor "可视化运行" - +
+ 全屏观看 > ## 6.1.2   哈希表简单实现 @@ -1643,8 +1645,8 @@ index = hash(key) % capacity ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ## 6.1.3   哈希冲突与扩容 diff --git a/docs/chapter_heap/build_heap.md b/docs/chapter_heap/build_heap.md index b7630d9ec..db2bddb81 100644 --- a/docs/chapter_heap/build_heap.md +++ b/docs/chapter_heap/build_heap.md @@ -204,8 +204,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ## 8.2.3   复杂度分析 diff --git a/docs/chapter_heap/heap.md b/docs/chapter_heap/heap.md index 442aa7c64..50fe00552 100644 --- a/docs/chapter_heap/heap.md +++ b/docs/chapter_heap/heap.md @@ -355,7 +355,8 @@ comments: true ??? pythontutor "可视化运行" - +
+ 全屏观看 > ## 8.1.2   堆的实现 @@ -715,8 +716,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ### 3.   元素入堆 @@ -1093,8 +1094,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ### 4.   堆顶元素出堆 @@ -1613,8 +1614,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ## 8.1.3   堆的常见应用 diff --git a/docs/chapter_heap/top_k.md b/docs/chapter_heap/top_k.md index b6c0b9768..c8e6b6753 100644 --- a/docs/chapter_heap/top_k.md +++ b/docs/chapter_heap/top_k.md @@ -419,8 +419,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 总共执行了 $n$ 轮入堆和出堆,堆的最大长度为 $k$ ,因此时间复杂度为 $O(n \log k)$ 。该方法的效率很高,当 $k$ 较小时,时间复杂度趋向 $O(n)$ ;当 $k$ 较大时,时间复杂度不会超过 $O(n \log n)$ 。 diff --git a/docs/chapter_searching/binary_search.md b/docs/chapter_searching/binary_search.md index 397940d34..973e1360d 100755 --- a/docs/chapter_searching/binary_search.md +++ b/docs/chapter_searching/binary_search.md @@ -336,8 +336,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > **时间复杂度为 $O(\log n)$** :在二分循环中,区间每轮缩小一半,循环次数为 $\log_2 n$ 。 @@ -632,8 +632,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 如图 10-3 所示,在两种区间表示下,二分查找算法的初始化、循环条件和缩小区间操作皆有所不同。 diff --git a/docs/chapter_searching/binary_search_edge.md b/docs/chapter_searching/binary_search_edge.md index 2505be35f..046e8a2a9 100644 --- a/docs/chapter_searching/binary_search_edge.md +++ b/docs/chapter_searching/binary_search_edge.md @@ -201,8 +201,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ## 10.3.2   查找右边界 @@ -426,8 +426,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ### 2.   转化为查找元素 diff --git a/docs/chapter_searching/binary_search_insertion.md b/docs/chapter_searching/binary_search_insertion.md index ad647617c..a32bfea4d 100644 --- a/docs/chapter_searching/binary_search_insertion.md +++ b/docs/chapter_searching/binary_search_insertion.md @@ -274,8 +274,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > ## 10.2.2   存在重复元素的情况 @@ -576,8 +576,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > !!! tip diff --git a/docs/chapter_searching/replace_linear_by_hashing.md b/docs/chapter_searching/replace_linear_by_hashing.md index 86c55c986..c78391c73 100755 --- a/docs/chapter_searching/replace_linear_by_hashing.md +++ b/docs/chapter_searching/replace_linear_by_hashing.md @@ -231,8 +231,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 此方法的时间复杂度为 $O(n^2)$ ,空间复杂度为 $O(1)$ ,在大数据量下非常耗时。 @@ -510,8 +510,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
+ 全屏观看 > 此方法通过哈希查找将时间复杂度从 $O(n^2)$ 降至 $O(n)$ ,大幅提升运行效率。 diff --git a/docs/chapter_sorting/bubble_sort.md b/docs/chapter_sorting/bubble_sort.md index 54d60fffa..c65f97752 100755 --- a/docs/chapter_sorting/bubble_sort.md +++ b/docs/chapter_sorting/bubble_sort.md @@ -279,8 +279,8 @@ comments: true ??? pythontutor "可视化运行" - - 全屏观看 > +
diff --git a/docs/chapter_sorting/bubble_sort.md b/docs/chapter_sorting/bubble_sort.md
index 54d60fffa..c65f97752 100755
--- a/docs/chapter_sorting/bubble_sort.md
+++ b/docs/chapter_sorting/bubble_sort.md
@@ -279,8 +279,8 @@ comments: true
??? pythontutor "Code Visualization"

-
-    Full screen >
+
+    Full screen >

## 11.3.2   Efficiency Optimization

@@ -564,8 +564,8 @@ comments: true
??? pythontutor "Code Visualization"

-
-    Full screen >
+
+    Full screen >

## 11.3.3   Algorithm Characteristics

diff --git a/docs/chapter_sorting/bucket_sort.md b/docs/chapter_sorting/bucket_sort.md
index 75a8efa0a..420c5da74 100644
--- a/docs/chapter_sorting/bucket_sort.md
+++ b/docs/chapter_sorting/bucket_sort.md
@@ -396,8 +396,8 @@ comments: true
??? pythontutor "Code Visualization"

-
-    Full screen >
+
+    Full screen >

## 11.8.2   Algorithm Characteristics

diff --git a/docs/chapter_sorting/counting_sort.md b/docs/chapter_sorting/counting_sort.md
index 612bcb758..dd0d6b76f 100644
--- a/docs/chapter_sorting/counting_sort.md
+++ b/docs/chapter_sorting/counting_sort.md
@@ -323,8 +323,8 @@ comments: true
??? pythontutor "Code Visualization"

-
-    Full screen >
+
+    Full screen >

!!! note "The Connection between Counting Sort and Bucket Sort"

@@ -785,8 +785,8 @@ $$
??? pythontutor "Code Visualization"

-
-    Full screen >
+
+    Full screen >

## 11.9.3   Algorithm Characteristics

diff --git a/docs/chapter_sorting/heap_sort.md b/docs/chapter_sorting/heap_sort.md
index e7102b1f4..dd8fb0fae 100644
--- a/docs/chapter_sorting/heap_sort.md
+++ b/docs/chapter_sorting/heap_sort.md
@@ -544,8 +544,8 @@ comments: true
??? pythontutor "Code Visualization"

-
-    Full screen >
+
+    Full screen >

## 11.7.2   Algorithm Characteristics

diff --git a/docs/chapter_sorting/insertion_sort.md b/docs/chapter_sorting/insertion_sort.md
index 54a50405e..6d9239861 100755
--- a/docs/chapter_sorting/insertion_sort.md
+++ b/docs/chapter_sorting/insertion_sort.md
@@ -252,8 +252,8 @@ comments: true
??? pythontutor "Code Visualization"

-
-    Full screen >
+
+    Full screen >

## 11.4.2   Algorithm Characteristics

diff --git a/docs/chapter_sorting/merge_sort.md b/docs/chapter_sorting/merge_sort.md
index 22bfc2094..4164a4967 100755
--- a/docs/chapter_sorting/merge_sort.md
+++ b/docs/chapter_sorting/merge_sort.md
@@ -635,8 +635,8 @@ comments: true
??? pythontutor "Code Visualization"

-
-    Full screen >
+
+    Full screen >

## 11.6.2   Algorithm Characteristics

diff --git a/docs/chapter_sorting/quick_sort.md b/docs/chapter_sorting/quick_sort.md
index 8f8535f7d..ceca7a5a1 100755
--- a/docs/chapter_sorting/quick_sort.md
+++ b/docs/chapter_sorting/quick_sort.md
@@ -360,8 +360,8 @@ comments: true
??? pythontutor "Code Visualization"

-
-    Full screen >
+
+    Full screen >

## 11.5.1   Algorithm Procedure

@@ -593,8 +593,8 @@ comments: true
??? pythontutor "Code Visualization"

-
-    Full screen >
+
+    Full screen >

## 11.5.2   Algorithm Characteristics

@@ -1054,8 +1054,8 @@ comments: true
??? pythontutor "Code Visualization"

-
-    Full screen >
+
+    Full screen >

## 11.5.5   Tail Recursion Optimization

@@ -1319,5 +1319,5 @@ comments: true
??? pythontutor "Code Visualization"

-
-    Full screen >
+
+    Full screen >

diff --git a/docs/chapter_sorting/radix_sort.md b/docs/chapter_sorting/radix_sort.md
index 85e457811..fe66d4a48 100644
--- a/docs/chapter_sorting/radix_sort.md
+++ b/docs/chapter_sorting/radix_sort.md
@@ -686,8 +686,8 @@ $$
??? pythontutor "Code Visualization"

-
-    Full screen >
+
+    Full screen >

!!! question "Why Sort Starting from the Least Significant Digit?"

diff --git a/docs/chapter_sorting/selection_sort.md b/docs/chapter_sorting/selection_sort.md
index 30636cdeb..4c7eef4dc 100644
--- a/docs/chapter_sorting/selection_sort.md
+++ b/docs/chapter_sorting/selection_sort.md
@@ -286,8 +286,8 @@ comments: true
??? pythontutor "Code Visualization"

-
-    Full screen >
+
+    Full screen >

## 11.2.1   Algorithm Characteristics

diff --git a/docs/chapter_stack_and_queue/deque.md b/docs/chapter_stack_and_queue/deque.md
index dd0677e86..923dfa894 100644
--- a/docs/chapter_stack_and_queue/deque.md
+++ b/docs/chapter_stack_and_queue/deque.md
@@ -219,28 +219,21 @@ comments: true
    // Note: since this is an array, the unshift() method is O(n)
    deque.unshift(3);
    deque.unshift(1);
-   console.log("Deque deque = ", deque);

    /* Access elements */
    const peekFirst = deque[0];
-   console.log("Front element peekFirst = " + peekFirst);
    const peekLast = deque[deque.length - 1];
-   console.log("Rear element peekLast = " + peekLast);

    /* Dequeue elements */
    // Note: since this is an array, the shift() method is O(n)
    const popFront = deque.shift();
-   console.log("Front-dequeued element popFront = " + popFront + ", deque after = " + deque);
    const popBack = deque.pop();
-   console.log("Rear-dequeued element popBack = " + popBack + ", deque after = " + deque);

    /* Get the length of the deque */
    const size = deque.length;
-   console.log("Deque length size = " + size);

    /* Check whether the deque is empty */
    const isEmpty = size === 0;
-   console.log("Is deque empty = " + isEmpty);
    ```

=== "TS"

@@ -257,28 +250,21 @@ comments: true
    // Note: since this is an array, the unshift() method is O(n)
    deque.unshift(3);
    deque.unshift(1);
-   console.log("Deque deque = ", deque);

    /* Access elements */
    const peekFirst: number = deque[0];
-   console.log("Front element peekFirst = " + peekFirst);
    const peekLast: number = deque[deque.length - 1];
-   console.log("Rear element peekLast = " + peekLast);

    /* Dequeue elements */
    // Note: since this is an array, the shift() method is O(n)
    const popFront: number = deque.shift() as number;
-   console.log("Front-dequeued element popFront = " + popFront + ", deque after = " + deque);
    const popBack: number = deque.pop() as number;
-   console.log("Rear-dequeued element popBack = " + popBack + ", deque after = " + deque);

    /* Get the length of the deque */
    const size: number = deque.length;
-   console.log("Deque length size = " + size);

    /* Check whether the deque is empty */
    const isEmpty: boolean = size === 0;
-   console.log("Is deque empty = " + isEmpty);
    ```

=== "Dart"

@@ -356,7 +342,8 @@ comments: true
??? pythontutor "Code Visualization"

-
+
+    Full screen >

## 5.3.2   Implementing a Double-Ended Queue *

diff --git a/docs/chapter_stack_and_queue/queue.md b/docs/chapter_stack_and_queue/queue.md
index ba4a793d8..474a6e633 100755
--- a/docs/chapter_stack_and_queue/queue.md
+++ b/docs/chapter_stack_and_queue/queue.md
@@ -320,7 +320,8 @@ comments: true
??? pythontutor "Code Visualization"

-
+
+    Full screen >

## 5.2.2   Implementing a Queue

@@ -1213,8 +1214,8 @@ comments: true
??? pythontutor "Code Visualization"

-
-    Full screen >
+
+    Full screen >

### 2.   Array-Based Implementation

@@ -2129,8 +2130,8 @@ comments: true
??? pythontutor "Code Visualization"

-
-    Full screen >
+
+    Full screen >

The queue implemented above still has a limitation: its length is fixed. This is not hard to remedy, however: we can replace the array with a dynamic array and thereby introduce an expansion mechanism. Interested readers may try to implement this themselves.
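As a hint for that exercise, the fix could take roughly the following shape (an illustrative sketch under assumed design choices; `DynamicArrayQueue` is not code from the repository): keep the circular-array arithmetic, and when the buffer fills up, copy the elements in queue order into an array twice as large.

```typescript
/* Circular-array queue with a simple doubling expansion (illustrative sketch) */
class DynamicArrayQueue {
    private nums: number[];      // backing array
    private front: number = 0;   // index of the front element
    private queSize: number = 0; // number of stored elements

    constructor(capacity: number = 4) {
        this.nums = new Array<number>(capacity);
    }

    push(num: number): void {
        if (this.queSize === this.nums.length) {
            // Full: allocate a larger array and copy the elements in queue order
            const newNums = new Array<number>(this.nums.length * 2);
            for (let i = 0; i < this.queSize; i++) {
                newNums[i] = this.nums[(this.front + i) % this.nums.length];
            }
            this.nums = newNums;
            this.front = 0;
        }
        // The rear index wraps around via modulo
        this.nums[(this.front + this.queSize) % this.nums.length] = num;
        this.queSize++;
    }

    pop(): number {
        if (this.queSize === 0) throw new Error('Queue is empty');
        const num = this.nums[this.front];
        this.front = (this.front + 1) % this.nums.length;
        this.queSize--;
        return num;
    }
}
```

Amortized over many operations, the occasional copy keeps `push` at $O(1)$ on average, the same argument that applies to dynamic arrays in general.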
diff --git a/docs/chapter_stack_and_queue/stack.md b/docs/chapter_stack_and_queue/stack.md
index cbd67e692..552e246a1 100755
--- a/docs/chapter_stack_and_queue/stack.md
+++ b/docs/chapter_stack_and_queue/stack.md
@@ -196,7 +196,7 @@ comments: true
    ```javascript title="stack.js"
    /* Initialize the stack */
-   // Javascript has no built-in stack class; an Array can be used as a stack
+   // JavaScript has no built-in stack class; an Array can be used as a stack
    const stack = [];

    /* Push elements onto the stack */
@@ -223,7 +223,7 @@ comments: true
    ```typescript title="stack.ts"
    /* Initialize the stack */
-   // Typescript has no built-in stack class; an Array can be used as a stack
+   // TypeScript has no built-in stack class; an Array can be used as a stack
    const stack: number[] = [];

    /* Push elements onto the stack */
@@ -314,7 +314,8 @@ comments: true
??? pythontutor "Code Visualization"

-
+
+    Full screen >

## 5.1.2   Implementing a Stack

@@ -1085,8 +1086,8 @@ comments: true
??? pythontutor "Code Visualization"

-
-    Full screen >
+
+    Full screen >

### 2.   Array-Based Implementation

@@ -1696,8 +1697,8 @@ comments: true
??? pythontutor "Code Visualization"

-
-    Full screen >
+
+    Full screen >

## 5.1.3   Comparison of the Two Implementations

diff --git a/docs/chapter_tree/array_representation_of_tree.md b/docs/chapter_tree/array_representation_of_tree.md
index 8fc5053f3..fc3679cf5 100644
--- a/docs/chapter_tree/array_representation_of_tree.md
+++ b/docs/chapter_tree/array_representation_of_tree.md
@@ -1163,8 +1163,8 @@ comments: true
??? pythontutor "Code Visualization"

-
-    Full screen >
+
+    Full screen >

## 7.3.3   Advantages and Limitations

diff --git a/docs/chapter_tree/binary_search_tree.md b/docs/chapter_tree/binary_search_tree.md
index ebbdfe05f..7067dd2d4 100755
--- a/docs/chapter_tree/binary_search_tree.md
+++ b/docs/chapter_tree/binary_search_tree.md
@@ -316,8 +316,8 @@ comments: true
??? pythontutor "Code Visualization"

-
-    Full screen >
+
+    Full screen >

### 2.   Inserting a Node

@@ -745,8 +745,8 @@ comments: true
??? pythontutor "Code Visualization"

-
-    Full screen >
+
+    Full screen >

As with searching for a node, inserting a node takes $O(\log n)$ time.
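To make that claim concrete, here is an illustrative sketch of iterative BST insertion (not the repository's implementation; this `TreeNode` is a minimal stand-in). Each loop iteration descends one level, so the work is proportional to the tree height: $O(\log n)$ on a balanced tree, degrading toward $O(n)$ if the tree degenerates into a list.

```typescript
/* Iterative BST insertion (illustrative sketch) */
class TreeNode {
    val: number;
    left: TreeNode | null = null;
    right: TreeNode | null = null;
    constructor(val: number) {
        this.val = val;
    }
}

function insert(root: TreeNode | null, num: number): TreeNode {
    if (root === null) return new TreeNode(num); // empty tree: new node becomes the root
    let cur = root;
    while (true) {
        if (num === cur.val) return root; // duplicate value: insert nothing
        if (num < cur.val) {
            if (cur.left === null) {
                cur.left = new TreeNode(num); // vacant left slot found
                return root;
            }
            cur = cur.left;
        } else {
            if (cur.right === null) {
                cur.right = new TreeNode(num); // vacant right slot found
                return root;
            }
            cur = cur.right;
        }
    }
}
```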
@@ -1467,8 +1467,8 @@ comments: true
??? pythontutor "Code Visualization"

-
-    Full screen >
+
+    Full screen >

### 4.   In-Order Traversal Is Ordered

diff --git a/docs/chapter_tree/binary_tree.md b/docs/chapter_tree/binary_tree.md
index 5d14f8268..2a0649847 100644
--- a/docs/chapter_tree/binary_tree.md
+++ b/docs/chapter_tree/binary_tree.md
@@ -413,7 +413,8 @@ comments: true
??? pythontutor "Code Visualization"

-
+
+    Full screen >

### 2.   Inserting and Removing Nodes

@@ -560,7 +561,8 @@ comments: true
??? pythontutor "Code Visualization"

-
+
+    Full screen >

!!! note

diff --git a/docs/chapter_tree/binary_tree_traversal.md b/docs/chapter_tree/binary_tree_traversal.md
index 99bad2871..6aaa49e12 100755
--- a/docs/chapter_tree/binary_tree_traversal.md
+++ b/docs/chapter_tree/binary_tree_traversal.md
@@ -326,8 +326,8 @@ comments: true
??? pythontutor "Code Visualization"

-
-    Full screen >
+
+    Full screen >

### 2.   Complexity Analysis

@@ -761,8 +761,8 @@ comments: true
??? pythontutor "Code Visualization"

-
-    Full screen >
+
+    Full screen >

!!! tip

diff --git a/overrides/stylesheets/extra.css b/overrides/stylesheets/extra.css
index dc910e534..9d83c57fb 100644
--- a/overrides/stylesheets/extra.css
+++ b/overrides/stylesheets/extra.css
@@ -206,3 +206,13 @@ body {
 .md-typeset summary:before {
     width: 1.25em;
 }
+
+.pythontutor-iframe {
+    width: 125%;
+    height: 125%;
+    max-width: 125% !important;
+    max-height: 125% !important;
+    transform: scale(0.8);
+    transform-origin: top left;
+    border: none;
+}