mirror of
https://github.com/krahets/hello-algo.git
synced 2026-05-11 19:17:05 +08:00
deploy
@@ -718,46 +718,46 @@
<ul class="md-nav__list" data-md-component="toc" data-md-scrollfix>

<li class="md-nav__item">
<a href="#231-assessing-time-growth-trend" class="md-nav__link">
<span class="md-ellipsis">
2.3.1 Assessing Time Growth Trend
</span>
</a>
</li>

<li class="md-nav__item">
<a href="#232-asymptotic-upper-bound" class="md-nav__link">
<span class="md-ellipsis">
2.3.2 Asymptotic Upper Bound
</span>
</a>
</li>

<li class="md-nav__item">
<a href="#233-calculation-method" class="md-nav__link">
<span class="md-ellipsis">
2.3.3 Calculation Method
</span>
</a>

<nav class="md-nav" aria-label="2.3.3 Calculation Method">
<ul class="md-nav__list">

<li class="md-nav__item">
<a href="#1-step-1-counting-the-number-of-operations" class="md-nav__link">
<span class="md-ellipsis">
1. Step 1: Counting the Number of Operations
</span>
</a>
</li>

<li class="md-nav__item">
<a href="#2-step-2-determining-the-asymptotic-upper-bound" class="md-nav__link">
<span class="md-ellipsis">
2. Step 2: Determining the Asymptotic Upper Bound
</span>
</a>
@@ -769,13 +769,13 @@
</li>

<li class="md-nav__item">
<a href="#234-common-types-of-time-complexity" class="md-nav__link">
<span class="md-ellipsis">
2.3.4 Common Types of Time Complexity
</span>
</a>

<nav class="md-nav" aria-label="2.3.4 Common Types of Time Complexity">
<ul class="md-nav__list">

<li class="md-nav__item">
@@ -790,16 +790,52 @@
<li class="md-nav__item">
<a href="#2-linear-order-on" class="md-nav__link">
<span class="md-ellipsis">
2. Linear Order \(O(n)\)
</span>
</a>
</li>

<li class="md-nav__item">
<a href="#3-quadratic-order-on2" class="md-nav__link">
<span class="md-ellipsis">
3. Quadratic Order \(O(n^2)\)
</span>
</a>
</li>

<li class="md-nav__item">
<a href="#4-exponential-order-o2n" class="md-nav__link">
<span class="md-ellipsis">
4. Exponential Order \(O(2^n)\)
</span>
</a>
</li>

<li class="md-nav__item">
<a href="#5-logarithmic-order-olog-n" class="md-nav__link">
<span class="md-ellipsis">
5. Logarithmic Order \(O(\log n)\)
</span>
</a>
</li>

<li class="md-nav__item">
<a href="#6-linear-logarithmic-order-on-log-n" class="md-nav__link">
<span class="md-ellipsis">
6. Linear-Logarithmic Order \(O(n \log n)\)
</span>
</a>
</li>

<li class="md-nav__item">
<a href="#7-factorial-order-on" class="md-nav__link">
<span class="md-ellipsis">
7. Factorial Order \(O(n!)\)
</span>
</a>
@@ -811,51 +847,9 @@
</li>

<li class="md-nav__item">
<a href="#235-worst-best-and-average-time-complexities" class="md-nav__link">
<span class="md-ellipsis">
2.3.5 Worst, Best, and Average Time Complexities
</span>
</a>
@@ -946,46 +940,46 @@
<ul class="md-nav__list" data-md-component="toc" data-md-scrollfix>

<li class="md-nav__item">
<a href="#231-assessing-time-growth-trend" class="md-nav__link">
<span class="md-ellipsis">
2.3.1 Assessing Time Growth Trend
</span>
</a>
</li>

<li class="md-nav__item">
<a href="#232-asymptotic-upper-bound" class="md-nav__link">
<span class="md-ellipsis">
2.3.2 Asymptotic Upper Bound
</span>
</a>
</li>

<li class="md-nav__item">
<a href="#233-calculation-method" class="md-nav__link">
<span class="md-ellipsis">
2.3.3 Calculation Method
</span>
</a>

<nav class="md-nav" aria-label="2.3.3 Calculation Method">
<ul class="md-nav__list">

<li class="md-nav__item">
<a href="#1-step-1-counting-the-number-of-operations" class="md-nav__link">
<span class="md-ellipsis">
1. Step 1: Counting the Number of Operations
</span>
</a>
</li>

<li class="md-nav__item">
<a href="#2-step-2-determining-the-asymptotic-upper-bound" class="md-nav__link">
<span class="md-ellipsis">
2. Step 2: Determining the Asymptotic Upper Bound
</span>
</a>
@@ -997,13 +991,13 @@
</li>

<li class="md-nav__item">
<a href="#234-common-types-of-time-complexity" class="md-nav__link">
<span class="md-ellipsis">
2.3.4 Common Types of Time Complexity
</span>
</a>

<nav class="md-nav" aria-label="2.3.4 Common Types of Time Complexity">
<ul class="md-nav__list">

<li class="md-nav__item">
@@ -1018,16 +1012,52 @@
<li class="md-nav__item">
<a href="#2-linear-order-on" class="md-nav__link">
<span class="md-ellipsis">
2. Linear Order \(O(n)\)
</span>
</a>
</li>

<li class="md-nav__item">
<a href="#3-quadratic-order-on2" class="md-nav__link">
<span class="md-ellipsis">
3. Quadratic Order \(O(n^2)\)
</span>
</a>
</li>

<li class="md-nav__item">
<a href="#4-exponential-order-o2n" class="md-nav__link">
<span class="md-ellipsis">
4. Exponential Order \(O(2^n)\)
</span>
</a>
</li>

<li class="md-nav__item">
<a href="#5-logarithmic-order-olog-n" class="md-nav__link">
<span class="md-ellipsis">
5. Logarithmic Order \(O(\log n)\)
</span>
</a>
</li>

<li class="md-nav__item">
<a href="#6-linear-logarithmic-order-on-log-n" class="md-nav__link">
<span class="md-ellipsis">
6. Linear-Logarithmic Order \(O(n \log n)\)
</span>
</a>
</li>

<li class="md-nav__item">
<a href="#7-factorial-order-on" class="md-nav__link">
<span class="md-ellipsis">
7. Factorial Order \(O(n!)\)
</span>
</a>
@@ -1039,51 +1069,9 @@
</li>

<li class="md-nav__item">
<a href="#235-worst-best-and-average-time-complexities" class="md-nav__link">
<span class="md-ellipsis">
2.3.5 Worst, Best, and Average Time Complexities
</span>
</a>
@@ -1126,13 +1114,13 @@
<!-- Page content -->
<h1 id="23-time-complexity">2.3 Time Complexity<a class="headerlink" href="#23-time-complexity" title="Permanent link">¶</a></h1>
<p>Time complexity is a concept used to measure how the run time of an algorithm increases with the size of the input data. Understanding time complexity is crucial for accurately assessing the efficiency of an algorithm. If we wanted to predict the run time of a piece of code exactly, we would have to proceed as follows:</p>
<ol>
<li><strong>Determining the Running Platform</strong>: This includes hardware configuration, programming language, system environment, etc., all of which can affect the efficiency of code execution.</li>
<li><strong>Evaluating the Run Time for Various Computational Operations</strong>: For instance, an addition operation <code>+</code> might take 1 ns, a multiplication operation <code>*</code> might take 10 ns, a print operation <code>print()</code> might take 5 ns, etc.</li>
<li><strong>Counting All the Computational Operations in the Code</strong>: Summing the execution times of all these operations gives the total run time.</li>
</ol>
<p>For example, consider the following code with an input size of <span class="arithmatex">\(n\)</span>:</p>
<div class="tabbed-set tabbed-alternate" data-tabs="1:12"><input checked="checked" id="__tabbed_1_1" name="__tabbed_1" type="radio" /><input id="__tabbed_1_2" name="__tabbed_1" type="radio" /><input id="__tabbed_1_3" name="__tabbed_1" type="radio" /><input id="__tabbed_1_4" name="__tabbed_1" type="radio" /><input id="__tabbed_1_5" name="__tabbed_1" type="radio" /><input id="__tabbed_1_6" name="__tabbed_1" type="radio" /><input id="__tabbed_1_7" name="__tabbed_1" type="radio" /><input id="__tabbed_1_8" name="__tabbed_1" type="radio" /><input id="__tabbed_1_9" name="__tabbed_1" type="radio" /><input id="__tabbed_1_10" name="__tabbed_1" type="radio" /><input id="__tabbed_1_11" name="__tabbed_1" type="radio" /><input id="__tabbed_1_12" name="__tabbed_1" type="radio" /><div class="tabbed-labels"><label for="__tabbed_1_1">Python</label><label for="__tabbed_1_2">C++</label><label for="__tabbed_1_3">Java</label><label for="__tabbed_1_4">C#</label><label for="__tabbed_1_5">Go</label><label for="__tabbed_1_6">Swift</label><label for="__tabbed_1_7">JS</label><label for="__tabbed_1_8">TS</label><label for="__tabbed_1_9">Dart</label><label for="__tabbed_1_10">Rust</label><label for="__tabbed_1_11">C</label><label for="__tabbed_1_12">Zig</label></div>
<div class="tabbed-content">
<div class="tabbed-block">
@@ -1291,14 +1279,14 @@
</div>
</div>
</div>
<p>Using the above method, the run time of the algorithm can be calculated as <span class="arithmatex">\((6n + 12)\)</span> ns:</p>
<div class="arithmatex">\[
1 + 1 + 10 + (1 + 5) \times n = 6n + 12
\]</div>
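<p>Sticking with the hypothetical per-operation costs above, the tally can be sketched in Python. The exact breakdown of operations is an assumption for illustration; only the resulting <span class="arithmatex">\(6n + 12\)</span> figure comes from the calculation above.</p>

```python
# Assumed per-operation costs from the text:
# assignment/addition 1 ns, multiplication 10 ns, print 5 ns.
ASSIGN_NS, ADD_NS, MUL_NS, PRINT_NS = 1, 1, 10, 5

def estimated_run_time_ns(n: int) -> int:
    """Sum the costs of every operation the example code performs."""
    total = ASSIGN_NS                  # one assignment
    total += ADD_NS                    # one addition
    total += MUL_NS                    # one multiplication
    total += (ADD_NS + PRINT_NS) * n   # loop: counter increment + print, n times
    return total                       # = 6n + 12
```

<p>For <span class="arithmatex">\(n = 100\)</span> this gives <span class="arithmatex">\(6 \times 100 + 12 = 612\)</span> ns.</p>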
<p>However, in practice, <strong>counting the run time of an algorithm is neither practical nor reasonable</strong>. First, we don't want to tie the estimated time to the running platform, as algorithms need to run on various platforms. Second, it's challenging to know the run time for each type of operation, making the estimation process difficult.</p>
<h2 id="231-assessing-time-growth-trend">2.3.1 Assessing Time Growth Trend<a class="headerlink" href="#231-assessing-time-growth-trend" title="Permanent link">¶</a></h2>
<p>Time complexity analysis does not count the algorithm's run time, <strong>but rather the growth trend of the run time as the data volume increases</strong>.</p>
<p>Let's understand this concept of "time growth trend" with an example. Assume the input data size is <span class="arithmatex">\(n\)</span>, and consider three algorithms <code>A</code>, <code>B</code>, and <code>C</code>:</p>
<div class="tabbed-set tabbed-alternate" data-tabs="2:12"><input checked="checked" id="__tabbed_2_1" name="__tabbed_2" type="radio" /><input id="__tabbed_2_2" name="__tabbed_2" type="radio" /><input id="__tabbed_2_3" name="__tabbed_2" type="radio" /><input id="__tabbed_2_4" name="__tabbed_2" type="radio" /><input id="__tabbed_2_5" name="__tabbed_2" type="radio" /><input id="__tabbed_2_6" name="__tabbed_2" type="radio" /><input id="__tabbed_2_7" name="__tabbed_2" type="radio" /><input id="__tabbed_2_8" name="__tabbed_2" type="radio" /><input id="__tabbed_2_9" name="__tabbed_2" type="radio" /><input id="__tabbed_2_10" name="__tabbed_2" type="radio" /><input id="__tabbed_2_11" name="__tabbed_2" type="radio" /><input id="__tabbed_2_12" name="__tabbed_2" type="radio" /><div class="tabbed-labels"><label for="__tabbed_2_1">Python</label><label for="__tabbed_2_2">C++</label><label for="__tabbed_2_3">Java</label><label for="__tabbed_2_4">C#</label><label for="__tabbed_2_5">Go</label><label for="__tabbed_2_6">Swift</label><label for="__tabbed_2_7">JS</label><label for="__tabbed_2_8">TS</label><label for="__tabbed_2_9">Dart</label><label for="__tabbed_2_10">Rust</label><label for="__tabbed_2_11">C</label><label for="__tabbed_2_12">Zig</label></div>
<div class="tabbed-content">
<div class="tabbed-block">
@@ -1530,23 +1518,23 @@
</div>
</div>
</div>
<p>The following figure shows the time complexities of these three algorithms.</p>
<ul>
<li>Algorithm <code>A</code> has just one print operation, and its run time does not grow with <span class="arithmatex">\(n\)</span>. Its time complexity is considered "constant order."</li>
<li>Algorithm <code>B</code> involves a print operation looping <span class="arithmatex">\(n\)</span> times, and its run time grows linearly with <span class="arithmatex">\(n\)</span>. Its time complexity is "linear order."</li>
<li>Algorithm <code>C</code> has a print operation looping 1,000,000 times. Although it takes a long time, it is independent of the input data size <span class="arithmatex">\(n\)</span>. Therefore, the time complexity of <code>C</code> is the same as <code>A</code>, which is "constant order."</li>
</ul>
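<p>The three algorithms can be sketched as follows. To keep the sketch easy to check, we count the would-be print operations instead of actually printing — a substitution for illustration, not the book's exact code.</p>

```python
def algorithm_A(n: int) -> int:
    """Constant order: one print operation, regardless of n."""
    return 1  # operations performed

def algorithm_B(n: int) -> int:
    """Linear order: the print operation runs n times."""
    ops = 0
    for _ in range(n):
        ops += 1
    return ops

def algorithm_C(n: int) -> int:
    """Constant order: 1,000,000 prints, independent of n."""
    ops = 0
    for _ in range(1_000_000):
        ops += 1
    return ops
```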
<p><a class="glightbox" href="../time_complexity.assets/time_complexity_simple_example.png" data-type="image" data-width="100%" data-height="auto" data-desc-position="bottom"><img alt="Time Growth Trend of Algorithms A, B, and C" class="animation-figure" src="../time_complexity.assets/time_complexity_simple_example.png" /></a></p>
<p align="center"> Figure 2-7 Time Growth Trend of Algorithms A, B, and C </p>

<p>Compared to directly counting the run time of an algorithm, what are the characteristics of time complexity analysis?</p>
<ul>
<li><strong>Time complexity effectively assesses algorithm efficiency</strong>. For instance, algorithm <code>B</code> has linearly growing run time, which is slower than algorithm <code>A</code> when <span class="arithmatex">\(n > 1\)</span> and slower than <code>C</code> when <span class="arithmatex">\(n > 1,000,000\)</span>. In fact, as long as the input data size <span class="arithmatex">\(n\)</span> is sufficiently large, a "constant order" complexity algorithm will always be better than a "linear order" one, demonstrating the essence of the time growth trend.</li>
<li><strong>Time complexity analysis is more straightforward</strong>. Obviously, the running platform and the types of computational operations are irrelevant to the trend of run time growth. Therefore, in time complexity analysis, we can simply treat the execution time of all computational operations as the same "unit time," simplifying the "computational operation run time count" to a "computational operation count." This significantly reduces the complexity of estimation.</li>
<li><strong>Time complexity has its limitations</strong>. For example, although algorithms <code>A</code> and <code>C</code> have the same time complexity, their actual run times can be quite different. Similarly, even though algorithm <code>B</code> has a higher time complexity than <code>C</code>, it is clearly superior when the input data size <span class="arithmatex">\(n\)</span> is small. In these cases, it's difficult to judge the efficiency of algorithms based solely on time complexity. Nonetheless, despite these issues, complexity analysis remains the most effective and commonly used method for evaluating algorithm efficiency.</li>
</ul>
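<p>The crossover points mentioned in the first bullet can be checked numerically under a unit-time model, in which every operation is assumed to cost one unit (an illustrative simplification):</p>

```python
def time_A(n: int) -> int:
    return 1          # constant: one operation

def time_B(n: int) -> int:
    return n          # linear: n operations

def time_C(n: int) -> int:
    return 1_000_000  # large constant: a million operations

# B is slower than A once n > 1 ...
assert time_B(2) > time_A(2)
# ... and slower than C once n > 1,000,000 ...
assert time_B(1_000_001) > time_C(1_000_001)
# ... but for small n, B beats C despite its higher time complexity.
assert time_B(10) < time_C(10)
```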
<h2 id="232-asymptotic-upper-bound">2.3.2 Asymptotic Upper Bound<a class="headerlink" href="#232-asymptotic-upper-bound" title="Permanent link">¶</a></h2>
<p>Consider a function with an input size of <span class="arithmatex">\(n\)</span>:</p>
<div class="tabbed-set tabbed-alternate" data-tabs="3:12"><input checked="checked" id="__tabbed_3_1" name="__tabbed_3" type="radio" /><input id="__tabbed_3_2" name="__tabbed_3" type="radio" /><input id="__tabbed_3_3" name="__tabbed_3" type="radio" /><input id="__tabbed_3_4" name="__tabbed_3" type="radio" /><input id="__tabbed_3_5" name="__tabbed_3" type="radio" /><input id="__tabbed_3_6" name="__tabbed_3" type="radio" /><input id="__tabbed_3_7" name="__tabbed_3" type="radio" /><input id="__tabbed_3_8" name="__tabbed_3" type="radio" /><input id="__tabbed_3_9" name="__tabbed_3" type="radio" /><input id="__tabbed_3_10" name="__tabbed_3" type="radio" /><input id="__tabbed_3_11" name="__tabbed_3" type="radio" /><input id="__tabbed_3_12" name="__tabbed_3" type="radio" /><div class="tabbed-labels"><label for="__tabbed_3_1">Python</label><label for="__tabbed_3_2">C++</label><label for="__tabbed_3_3">Java</label><label for="__tabbed_3_4">C#</label><label for="__tabbed_3_5">Go</label><label for="__tabbed_3_6">Swift</label><label for="__tabbed_3_7">JS</label><label for="__tabbed_3_8">TS</label><label for="__tabbed_3_9">Dart</label><label for="__tabbed_3_10">Rust</label><label for="__tabbed_3_11">C</label><label for="__tabbed_3_12">Zig</label></div>
<div class="tabbed-content">
<div class="tabbed-block">
@@ -1694,32 +1682,31 @@
</div>
</div>
</div>
<p>Given a function that represents the number of operations of an algorithm as a function of the input size <span class="arithmatex">\(n\)</span>, denoted as <span class="arithmatex">\(T(n)\)</span>, consider the following example:</p>
<div class="arithmatex">\[
T(n) = 3 + 2n
\]</div>
<p>Since <span class="arithmatex">\(T(n)\)</span> is a linear function, its growth trend is linear, and therefore, its time complexity is of linear order, denoted as <span class="arithmatex">\(O(n)\)</span>. This mathematical notation, known as "big-O notation," represents the "asymptotic upper bound" of the function <span class="arithmatex">\(T(n)\)</span>.</p>
<p>In essence, time complexity analysis is about finding the asymptotic upper bound of the "number of operations <span class="arithmatex">\(T(n)\)</span>". It has a precise mathematical definition.</p>
<div class="admonition abstract">
<p class="admonition-title">Asymptotic Upper Bound</p>
<p>If there exist positive real numbers <span class="arithmatex">\(c\)</span> and <span class="arithmatex">\(n_0\)</span> such that for all <span class="arithmatex">\(n > n_0\)</span>, <span class="arithmatex">\(T(n) \leq c \cdot f(n)\)</span>, then <span class="arithmatex">\(f(n)\)</span> is considered an asymptotic upper bound of <span class="arithmatex">\(T(n)\)</span>, denoted as <span class="arithmatex">\(T(n) = O(f(n))\)</span>.</p>
</div>
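<p>The definition can be checked numerically for <span class="arithmatex">\(T(n) = 3 + 2n\)</span> with <span class="arithmatex">\(f(n) = n\)</span>: the witnesses <span class="arithmatex">\(c = 3\)</span> and <span class="arithmatex">\(n_0 = 3\)</span> work, since <span class="arithmatex">\(3 + 2n \leq 3n\)</span> whenever <span class="arithmatex">\(n \geq 3\)</span>. These particular witnesses are one choice among many.</p>

```python
def T(n: int) -> int:
    return 3 + 2 * n  # operation count from the example above

def f(n: int) -> int:
    return n          # candidate asymptotic upper bound

c, n0 = 3, 3  # one valid choice of witnesses

# T(n) <= c * f(n) holds for every n > n0 (checked over a finite range here).
assert all(T(n) <= c * f(n) for n in range(n0 + 1, 10_000))
```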
|
||||
<p>As shown in the Figure 2-8 , computing the asymptotic upper bound is a matter of finding a function <span class="arithmatex">\(f(n)\)</span> such that <span class="arithmatex">\(T(n)\)</span> and <span class="arithmatex">\(f(n)\)</span> are at the same growth level as <span class="arithmatex">\(n\)</span> tends to infinity, differing only by a multiple of the constant term <span class="arithmatex">\(c\)</span>.</p>
|
||||
<p><a class="glightbox" href="../time_complexity.assets/asymptotic_upper_bound.png" data-type="image" data-width="100%" data-height="auto" data-desc-position="bottom"><img alt="asymptotic upper bound of function" class="animation-figure" src="../time_complexity.assets/asymptotic_upper_bound.png" /></a></p>
|
||||
<p align="center"> Figure 2-8 asymptotic upper bound of function </p>
|
||||
<p>As illustrated below, calculating the asymptotic upper bound involves finding a function <span class="arithmatex">\(f(n)\)</span> such that, as <span class="arithmatex">\(n\)</span> approaches infinity, <span class="arithmatex">\(T(n)\)</span> and <span class="arithmatex">\(f(n)\)</span> have the same growth order, differing only by a constant factor <span class="arithmatex">\(c\)</span>.</p>
|
||||
<p><a class="glightbox" href="../time_complexity.assets/asymptotic_upper_bound.png" data-type="image" data-width="100%" data-height="auto" data-desc-position="bottom"><img alt="Asymptotic Upper Bound of a Function" class="animation-figure" src="../time_complexity.assets/asymptotic_upper_bound.png" /></a></p>
|
||||
<p align="center"> Figure 2-8 Asymptotic Upper Bound of a Function </p>
|
||||
|
||||
<h2 id="233-calculation-method">2.3.3 Calculation Method<a class="headerlink" href="#233-calculation-method" title="Permanent link">¶</a></h2>
<p>While the concept of asymptotic upper bound might seem mathematically dense, you don't need to fully grasp it right away. Let's first understand the method of calculation, which can be practiced and comprehended over time.</p>
<p>Once <span class="arithmatex">\(f(n)\)</span> is determined, we obtain the time complexity <span class="arithmatex">\(O(f(n))\)</span>. But how do we determine the asymptotic upper bound <span class="arithmatex">\(f(n)\)</span>? This process generally involves two steps: counting the number of operations and determining the asymptotic upper bound.</p>
<h3 id="1-step-1-counting-the-number-of-operations">1. Step 1: Counting the Number of Operations<a class="headerlink" href="#1-step-1-counting-the-number-of-operations" title="Permanent link">¶</a></h3>
<p>This step involves going through the code line by line. However, since the constant <span class="arithmatex">\(c\)</span> in <span class="arithmatex">\(c \cdot f(n)\)</span> can be arbitrarily large, <strong>all coefficients and constant terms in the operation count <span class="arithmatex">\(T(n)\)</span> can be ignored</strong>. This principle leads to the following simplification techniques for counting operations.</p>
<ol>
<li><strong>Ignore constant terms in <span class="arithmatex">\(T(n)\)</span></strong>. Since they are unrelated to <span class="arithmatex">\(n\)</span>, they have no effect on the time complexity.</li>
<li><strong>Omit all coefficients</strong>. For example, looping <span class="arithmatex">\(2n\)</span>, <span class="arithmatex">\(5n + 1\)</span> times, etc., can be simplified to <span class="arithmatex">\(n\)</span> times since the coefficient before <span class="arithmatex">\(n\)</span> does not impact the time complexity.</li>
<li><strong>Use multiplication for nested loops</strong>. The total number of operations equals the product of the number of operations in each loop, applying the simplification techniques from points 1 and 2 for each loop level.</li>
</ol>
<p>Given a function, we can use these techniques to count operations:</p>
<div class="tabbed-set tabbed-alternate" data-tabs="4:12"><input checked="checked" id="__tabbed_4_1" name="__tabbed_4" type="radio" /><input id="__tabbed_4_2" name="__tabbed_4" type="radio" /><input id="__tabbed_4_3" name="__tabbed_4" type="radio" /><input id="__tabbed_4_4" name="__tabbed_4" type="radio" /><input id="__tabbed_4_5" name="__tabbed_4" type="radio" /><input id="__tabbed_4_6" name="__tabbed_4" type="radio" /><input id="__tabbed_4_7" name="__tabbed_4" type="radio" /><input id="__tabbed_4_8" name="__tabbed_4" type="radio" /><input id="__tabbed_4_9" name="__tabbed_4" type="radio" /><input id="__tabbed_4_10" name="__tabbed_4" type="radio" /><input id="__tabbed_4_11" name="__tabbed_4" type="radio" /><input id="__tabbed_4_12" name="__tabbed_4" type="radio" /><div class="tabbed-labels"><label for="__tabbed_4_1">Python</label><label for="__tabbed_4_2">C++</label><label for="__tabbed_4_3">Java</label><label for="__tabbed_4_4">C#</label><label for="__tabbed_4_5">Go</label><label for="__tabbed_4_6">Swift</label><label for="__tabbed_4_7">JS</label><label for="__tabbed_4_8">TS</label><label for="__tabbed_4_9">Dart</label><label for="__tabbed_4_10">Rust</label><label for="__tabbed_4_11">C</label><label for="__tabbed_4_12">Zig</label></div>
<div class="tabbed-content">
<div class="tabbed-block">
</div>
</div>
</div>
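The multi-language listings in the tabs above are elided in this view. As a hedged sketch (the function name `algorithm` and the explicit `count` variable are illustrative, not necessarily the book's exact listing), a Python function whose complete operation count works out to \(T(n) = 2n(n + 1) + (5n + 1) + 2\) could look like:

```python
def algorithm(n: int) -> int:
    """Return the counted number of operations, T(n) = 2n(n+1) + (5n+1) + 2."""
    count = 0
    count += 1  # one assignment, e.g. a = 1
    count += 1  # one assignment, e.g. a = a + n
    # single loop: 5n + 1 operations
    for _ in range(5 * n + 1):
        count += 1
    # nested loop: 2n * (n + 1) operations
    for _ in range(2 * n):
        for _ in range(n + 1):
            count += 1
    return count
```

Applying the simplification techniques, the two assignments and all coefficients drop out, leaving \(T(n) = n^2 + n\).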
<p>The formula below shows the counting results before and after simplification, both leading to a time complexity of <span class="arithmatex">\(O(n^2)\)</span>:</p>
<div class="arithmatex">\[
\begin{aligned}
T(n) & = 2n(n + 1) + (5n + 1) + 2 & \text{Complete Count (-.-|||)} \newline
& = 2n^2 + 7n + 3 \newline
T(n) & = n^2 + n & \text{Simplified Count (o.O)}
\end{aligned}
\]</div>
<h3 id="2-step-2-determining-the-asymptotic-upper-bound">2. Step 2: Determining the Asymptotic Upper Bound<a class="headerlink" href="#2-step-2-determining-the-asymptotic-upper-bound" title="Permanent link">¶</a></h3>
<p><strong>The time complexity is determined by the highest order term in <span class="arithmatex">\(T(n)\)</span></strong>. This is because, as <span class="arithmatex">\(n\)</span> approaches infinity, the highest order term dominates, rendering the influence of other terms negligible.</p>
<p>The following table illustrates examples of different operation counts and their corresponding time complexities. Some exaggerated values are used to emphasize that coefficients cannot alter the order of growth. When <span class="arithmatex">\(n\)</span> becomes very large, these constants become insignificant.</p>
<p align="center"> Table 2-2 Time Complexity for Different Operation Counts </p>
<div class="center-table">
<table>
<thead>
<tr>
<th>Operation Count <span class="arithmatex">\(T(n)\)</span></th>
<th>Time Complexity <span class="arithmatex">\(O(f(n))\)</span></th>
</tr>
</thead>
<tbody>
</tbody>
</table>
</div>
<h2 id="234-common-types-of-time-complexity">2.3.4 Common Types of Time Complexity<a class="headerlink" href="#234-common-types-of-time-complexity" title="Permanent link">¶</a></h2>
<p>Let's consider the input data size as <span class="arithmatex">\(n\)</span>. The common types of time complexities are illustrated below, arranged from lowest to highest:</p>
<div class="arithmatex">\[
\begin{aligned}
O(1) < O(\log n) < O(n) < O(n \log n) < O(n^2) < O(2^n) < O(n!) \newline
\text{Constant Order} < \text{Logarithmic Order} < \text{Linear Order} < \text{Linear-Logarithmic Order} < \text{Quadratic Order} < \text{Exponential Order} < \text{Factorial Order}
\end{aligned}
\]</div>
<p><a class="glightbox" href="../time_complexity.assets/time_complexity_common_types.png" data-type="image" data-width="100%" data-height="auto" data-desc-position="bottom"><img alt="Common Types of Time Complexity" class="animation-figure" src="../time_complexity.assets/time_complexity_common_types.png" /></a></p>
<p align="center"> Figure 2-9 Common Types of Time Complexity </p>
<h3 id="1-constant-order-o1">1. Constant Order <span class="arithmatex">\(O(1)\)</span><a class="headerlink" href="#1-constant-order-o1" title="Permanent link">¶</a></h3>
<p>Constant order means the number of operations is independent of the input data size <span class="arithmatex">\(n\)</span>. In the following function, although the number of operations <code>size</code> might be large, the time complexity remains <span class="arithmatex">\(O(1)\)</span> as it's unrelated to <span class="arithmatex">\(n\)</span>:</p>
<div class="tabbed-set tabbed-alternate" data-tabs="5:12"><input checked="checked" id="__tabbed_5_1" name="__tabbed_5" type="radio" /><input id="__tabbed_5_2" name="__tabbed_5" type="radio" /><input id="__tabbed_5_3" name="__tabbed_5" type="radio" /><input id="__tabbed_5_4" name="__tabbed_5" type="radio" /><input id="__tabbed_5_5" name="__tabbed_5" type="radio" /><input id="__tabbed_5_6" name="__tabbed_5" type="radio" /><input id="__tabbed_5_7" name="__tabbed_5" type="radio" /><input id="__tabbed_5_8" name="__tabbed_5" type="radio" /><input id="__tabbed_5_9" name="__tabbed_5" type="radio" /><input id="__tabbed_5_10" name="__tabbed_5" type="radio" /><input id="__tabbed_5_11" name="__tabbed_5" type="radio" /><input id="__tabbed_5_12" name="__tabbed_5" type="radio" /><div class="tabbed-labels"><label for="__tabbed_5_1">Python</label><label for="__tabbed_5_2">C++</label><label for="__tabbed_5_3">Java</label><label for="__tabbed_5_4">C#</label><label for="__tabbed_5_5">Go</label><label for="__tabbed_5_6">Swift</label><label for="__tabbed_5_7">JS</label><label for="__tabbed_5_8">TS</label><label for="__tabbed_5_9">Dart</label><label for="__tabbed_5_10">Rust</label><label for="__tabbed_5_11">C</label><label for="__tabbed_5_12">Zig</label></div>
<div class="tabbed-content">
<div class="tabbed-block">
</div>
</div>
</div>
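The tabbed listings above are elided in this view. A minimal Python sketch of constant order (function name illustrative) might be:

```python
def constant(n: int) -> int:
    """Constant order O(1): the loop count is fixed and unrelated to n."""
    count = 0
    size = 100000  # large, but independent of the input size n
    for _ in range(size):
        count += 1
    return count
```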
<h3 id="2-linear-order-on">2. Linear Order <span class="arithmatex">\(O(n)\)</span><a class="headerlink" href="#2-linear-order-on" title="Permanent link">¶</a></h3>
<p>Linear order indicates the number of operations grows linearly with the input data size <span class="arithmatex">\(n\)</span>. Linear order commonly appears in single-loop structures:</p>
<div class="tabbed-set tabbed-alternate" data-tabs="6:12"><input checked="checked" id="__tabbed_6_1" name="__tabbed_6" type="radio" /><input id="__tabbed_6_2" name="__tabbed_6" type="radio" /><input id="__tabbed_6_3" name="__tabbed_6" type="radio" /><input id="__tabbed_6_4" name="__tabbed_6" type="radio" /><input id="__tabbed_6_5" name="__tabbed_6" type="radio" /><input id="__tabbed_6_6" name="__tabbed_6" type="radio" /><input id="__tabbed_6_7" name="__tabbed_6" type="radio" /><input id="__tabbed_6_8" name="__tabbed_6" type="radio" /><input id="__tabbed_6_9" name="__tabbed_6" type="radio" /><input id="__tabbed_6_10" name="__tabbed_6" type="radio" /><input id="__tabbed_6_11" name="__tabbed_6" type="radio" /><input id="__tabbed_6_12" name="__tabbed_6" type="radio" /><div class="tabbed-labels"><label for="__tabbed_6_1">Python</label><label for="__tabbed_6_2">C++</label><label for="__tabbed_6_3">Java</label><label for="__tabbed_6_4">C#</label><label for="__tabbed_6_5">Go</label><label for="__tabbed_6_6">Swift</label><label for="__tabbed_6_7">JS</label><label for="__tabbed_6_8">TS</label><label for="__tabbed_6_9">Dart</label><label for="__tabbed_6_10">Rust</label><label for="__tabbed_6_11">C</label><label for="__tabbed_6_12">Zig</label></div>
<div class="tabbed-content">
<div class="tabbed-block">
</div>
</div>
</div>
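With the listings elided above, a minimal Python sketch of a single-loop linear-order function (name illustrative) could be:

```python
def linear(n: int) -> int:
    """Linear order O(n): the loop body runs exactly n times."""
    count = 0
    for _ in range(n):
        count += 1
    return count
```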
<p>Operations like array traversal and linked list traversal have a time complexity of <span class="arithmatex">\(O(n)\)</span>, where <span class="arithmatex">\(n\)</span> is the length of the array or list:</p>
<div class="tabbed-set tabbed-alternate" data-tabs="7:12"><input checked="checked" id="__tabbed_7_1" name="__tabbed_7" type="radio" /><input id="__tabbed_7_2" name="__tabbed_7" type="radio" /><input id="__tabbed_7_3" name="__tabbed_7" type="radio" /><input id="__tabbed_7_4" name="__tabbed_7" type="radio" /><input id="__tabbed_7_5" name="__tabbed_7" type="radio" /><input id="__tabbed_7_6" name="__tabbed_7" type="radio" /><input id="__tabbed_7_7" name="__tabbed_7" type="radio" /><input id="__tabbed_7_8" name="__tabbed_7" type="radio" /><input id="__tabbed_7_9" name="__tabbed_7" type="radio" /><input id="__tabbed_7_10" name="__tabbed_7" type="radio" /><input id="__tabbed_7_11" name="__tabbed_7" type="radio" /><input id="__tabbed_7_12" name="__tabbed_7" type="radio" /><div class="tabbed-labels"><label for="__tabbed_7_1">Python</label><label for="__tabbed_7_2">C++</label><label for="__tabbed_7_3">Java</label><label for="__tabbed_7_4">C#</label><label for="__tabbed_7_5">Go</label><label for="__tabbed_7_6">Swift</label><label for="__tabbed_7_7">JS</label><label for="__tabbed_7_8">TS</label><label for="__tabbed_7_9">Dart</label><label for="__tabbed_7_10">Rust</label><label for="__tabbed_7_11">C</label><label for="__tabbed_7_12">Zig</label></div>
<div class="tabbed-content">
<div class="tabbed-block">
</div>
</div>
</div>
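The traversal code is elided above; a hedged Python sketch (function name illustrative) of an array traversal with one counted operation per element:

```python
def array_traversal(nums: list[int]) -> int:
    """Linear order O(n): one operation per element, n = len(nums)."""
    count = 0
    for _ in nums:  # the loop count equals the array length
        count += 1
    return count
```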
<p>It's important to note that <strong>the input data size <span class="arithmatex">\(n\)</span> should be determined based on the type of input data</strong>. For example, in the first example, <span class="arithmatex">\(n\)</span> represents the input data size, while in the second example, the length of the array <span class="arithmatex">\(n\)</span> is the data size.</p>
<h3 id="3-quadratic-order-on2">3. Quadratic Order <span class="arithmatex">\(O(n^2)\)</span><a class="headerlink" href="#3-quadratic-order-on2" title="Permanent link">¶</a></h3>
<p>Quadratic order means the number of operations grows quadratically with the input data size <span class="arithmatex">\(n\)</span>. Quadratic order typically appears in nested loops, where both the outer and inner loops have a time complexity of <span class="arithmatex">\(O(n)\)</span>, resulting in an overall complexity of <span class="arithmatex">\(O(n^2)\)</span>:</p>
<div class="tabbed-set tabbed-alternate" data-tabs="8:12"><input checked="checked" id="__tabbed_8_1" name="__tabbed_8" type="radio" /><input id="__tabbed_8_2" name="__tabbed_8" type="radio" /><input id="__tabbed_8_3" name="__tabbed_8" type="radio" /><input id="__tabbed_8_4" name="__tabbed_8" type="radio" /><input id="__tabbed_8_5" name="__tabbed_8" type="radio" /><input id="__tabbed_8_6" name="__tabbed_8" type="radio" /><input id="__tabbed_8_7" name="__tabbed_8" type="radio" /><input id="__tabbed_8_8" name="__tabbed_8" type="radio" /><input id="__tabbed_8_9" name="__tabbed_8" type="radio" /><input id="__tabbed_8_10" name="__tabbed_8" type="radio" /><input id="__tabbed_8_11" name="__tabbed_8" type="radio" /><input id="__tabbed_8_12" name="__tabbed_8" type="radio" /><div class="tabbed-labels"><label for="__tabbed_8_1">Python</label><label for="__tabbed_8_2">C++</label><label for="__tabbed_8_3">Java</label><label for="__tabbed_8_4">C#</label><label for="__tabbed_8_5">Go</label><label for="__tabbed_8_6">Swift</label><label for="__tabbed_8_7">JS</label><label for="__tabbed_8_8">TS</label><label for="__tabbed_8_9">Dart</label><label for="__tabbed_8_10">Rust</label><label for="__tabbed_8_11">C</label><label for="__tabbed_8_12">Zig</label></div>
<div class="tabbed-content">
<div class="tabbed-block">
</div>
</div>
</div>
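The nested-loop listing is elided above; a minimal Python sketch of quadratic order (function name illustrative) could be:

```python
def quadratic(n: int) -> int:
    """Quadratic order O(n^2): an O(n) loop nested inside another O(n) loop."""
    count = 0
    for _ in range(n):
        for _ in range(n):
            count += 1
    return count
```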
<p>The following image compares constant order, linear order, and quadratic order time complexities.</p>
<p><a class="glightbox" href="../time_complexity.assets/time_complexity_constant_linear_quadratic.png" data-type="image" data-width="100%" data-height="auto" data-desc-position="bottom"><img alt="Constant, Linear, and Quadratic Order Time Complexities" class="animation-figure" src="../time_complexity.assets/time_complexity_constant_linear_quadratic.png" /></a></p>
<p align="center"> Figure 2-10 Constant, Linear, and Quadratic Order Time Complexities </p>
<p>For instance, in bubble sort, the outer loop runs <span class="arithmatex">\(n - 1\)</span> times, and the inner loop runs <span class="arithmatex">\(n-1\)</span>, <span class="arithmatex">\(n-2\)</span>, ..., <span class="arithmatex">\(2\)</span>, <span class="arithmatex">\(1\)</span> times, averaging <span class="arithmatex">\(n / 2\)</span> times, resulting in a time complexity of <span class="arithmatex">\(O((n - 1) n / 2) = O(n^2)\)</span>:</p>
<div class="tabbed-set tabbed-alternate" data-tabs="9:12"><input checked="checked" id="__tabbed_9_1" name="__tabbed_9" type="radio" /><input id="__tabbed_9_2" name="__tabbed_9" type="radio" /><input id="__tabbed_9_3" name="__tabbed_9" type="radio" /><input id="__tabbed_9_4" name="__tabbed_9" type="radio" /><input id="__tabbed_9_5" name="__tabbed_9" type="radio" /><input id="__tabbed_9_6" name="__tabbed_9" type="radio" /><input id="__tabbed_9_7" name="__tabbed_9" type="radio" /><input id="__tabbed_9_8" name="__tabbed_9" type="radio" /><input id="__tabbed_9_9" name="__tabbed_9" type="radio" /><input id="__tabbed_9_10" name="__tabbed_9" type="radio" /><input id="__tabbed_9_11" name="__tabbed_9" type="radio" /><input id="__tabbed_9_12" name="__tabbed_9" type="radio" /><div class="tabbed-labels"><label for="__tabbed_9_1">Python</label><label for="__tabbed_9_2">C++</label><label for="__tabbed_9_3">Java</label><label for="__tabbed_9_4">C#</label><label for="__tabbed_9_5">Go</label><label for="__tabbed_9_6">Swift</label><label for="__tabbed_9_7">JS</label><label for="__tabbed_9_8">TS</label><label for="__tabbed_9_9">Dart</label><label for="__tabbed_9_10">Rust</label><label for="__tabbed_9_11">C</label><label for="__tabbed_9_12">Zig</label></div>
<div class="tabbed-content">
<div class="tabbed-block">
</div>
</div>
</div>
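The bubble sort listing is elided above. As a hedged sketch (returning a comparison count for illustration, not necessarily the book's exact listing), the shrinking inner loop performs \(n(n-1)/2\) comparisons in total:

```python
def bubble_sort(nums: list[int]) -> int:
    """Sort nums in place; return the number of comparisons, n(n-1)/2."""
    count = 0
    # outer loop: the unsorted range is [0, i]
    for i in range(len(nums) - 1, 0, -1):
        # inner loop: bubble the largest element of [0, i] to position i
        for j in range(i):
            count += 1
            if nums[j] > nums[j + 1]:
                nums[j], nums[j + 1] = nums[j + 1], nums[j]
    return count
```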
<h3 id="4-exponential-order-o2n">4. Exponential Order <span class="arithmatex">\(O(2^n)\)</span><a class="headerlink" href="#4-exponential-order-o2n" title="Permanent link">¶</a></h3>
<p>Biological "cell division" is a classic example of exponential order growth: starting with one cell, it becomes two after one division, four after two divisions, and so on, resulting in <span class="arithmatex">\(2^n\)</span> cells after <span class="arithmatex">\(n\)</span> divisions.</p>
<p>The following image and code simulate the cell division process, with a time complexity of <span class="arithmatex">\(O(2^n)\)</span>:</p>
<div class="tabbed-set tabbed-alternate" data-tabs="10:12"><input checked="checked" id="__tabbed_10_1" name="__tabbed_10" type="radio" /><input id="__tabbed_10_2" name="__tabbed_10" type="radio" /><input id="__tabbed_10_3" name="__tabbed_10" type="radio" /><input id="__tabbed_10_4" name="__tabbed_10" type="radio" /><input id="__tabbed_10_5" name="__tabbed_10" type="radio" /><input id="__tabbed_10_6" name="__tabbed_10" type="radio" /><input id="__tabbed_10_7" name="__tabbed_10" type="radio" /><input id="__tabbed_10_8" name="__tabbed_10" type="radio" /><input id="__tabbed_10_9" name="__tabbed_10" type="radio" /><input id="__tabbed_10_10" name="__tabbed_10" type="radio" /><input id="__tabbed_10_11" name="__tabbed_10" type="radio" /><input id="__tabbed_10_12" name="__tabbed_10" type="radio" /><div class="tabbed-labels"><label for="__tabbed_10_1">Python</label><label for="__tabbed_10_2">C++</label><label for="__tabbed_10_3">Java</label><label for="__tabbed_10_4">C#</label><label for="__tabbed_10_5">Go</label><label for="__tabbed_10_6">Swift</label><label for="__tabbed_10_7">JS</label><label for="__tabbed_10_8">TS</label><label for="__tabbed_10_9">Dart</label><label for="__tabbed_10_10">Rust</label><label for="__tabbed_10_11">C</label><label for="__tabbed_10_12">Zig</label></div>
<div class="tabbed-content">
<div class="tabbed-block">
</div>
</div>
</div>
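The cell-division listing is elided above; a minimal Python sketch (function name illustrative) where the number of cells doubles each round, giving \(1 + 2 + 4 + \dots + 2^{n-1} = 2^n - 1\) operations:

```python
def exponential(n: int) -> int:
    """Exponential order O(2^n): the cell count doubles each round."""
    count, base = 0, 1
    # each of the n rounds splits every current cell in two
    for _ in range(n):
        for _ in range(base):
            count += 1
        base *= 2
    return count  # total operations: 2^n - 1
```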
<p><a class="glightbox" href="../time_complexity.assets/time_complexity_exponential.png" data-type="image" data-width="100%" data-height="auto" data-desc-position="bottom"><img alt="Exponential Order Time Complexity" class="animation-figure" src="../time_complexity.assets/time_complexity_exponential.png" /></a></p>
<p align="center"> Figure 2-11 Exponential Order Time Complexity </p>
<p>In practice, exponential order often appears in recursive functions. For example, in the code below, it recursively splits into two halves, stopping after <span class="arithmatex">\(n\)</span> divisions:</p>
<div class="tabbed-set tabbed-alternate" data-tabs="11:12"><input checked="checked" id="__tabbed_11_1" name="__tabbed_11" type="radio" /><input id="__tabbed_11_2" name="__tabbed_11" type="radio" /><input id="__tabbed_11_3" name="__tabbed_11" type="radio" /><input id="__tabbed_11_4" name="__tabbed_11" type="radio" /><input id="__tabbed_11_5" name="__tabbed_11" type="radio" /><input id="__tabbed_11_6" name="__tabbed_11" type="radio" /><input id="__tabbed_11_7" name="__tabbed_11" type="radio" /><input id="__tabbed_11_8" name="__tabbed_11" type="radio" /><input id="__tabbed_11_9" name="__tabbed_11" type="radio" /><input id="__tabbed_11_10" name="__tabbed_11" type="radio" /><input id="__tabbed_11_11" name="__tabbed_11" type="radio" /><input id="__tabbed_11_12" name="__tabbed_11" type="radio" /><div class="tabbed-labels"><label for="__tabbed_11_1">Python</label><label for="__tabbed_11_2">C++</label><label for="__tabbed_11_3">Java</label><label for="__tabbed_11_4">C#</label><label for="__tabbed_11_5">Go</label><label for="__tabbed_11_6">Swift</label><label for="__tabbed_11_7">JS</label><label for="__tabbed_11_8">TS</label><label for="__tabbed_11_9">Dart</label><label for="__tabbed_11_10">Rust</label><label for="__tabbed_11_11">C</label><label for="__tabbed_11_12">Zig</label></div>
<div class="tabbed-content">
<div class="tabbed-block">
</div>
</div>
</div>
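The recursive listing is elided above; a minimal Python sketch (function name illustrative) in which each call branches into two subcalls, producing \(2^n - 1\) calls in total:

```python
def exp_recur(n: int) -> int:
    """Exponential order O(2^n) via recursion: each call spawns two subcalls."""
    if n == 1:
        return 1
    return exp_recur(n - 1) + exp_recur(n - 1) + 1
```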
|
||||
<p>Exponential order grows very rapidly and is more common in exhaustive methods (brute force search, backtracking, etc.). For problems with large data sizes, exponential order is unacceptable and usually requires the use of algorithms such as dynamic programming or greedy algorithms to solve.</p>
|
||||
<h3 id="1-logarithmic-order-olog-n">1. Logarithmic Order <span class="arithmatex">\(O(\Log N)\)</span><a class="headerlink" href="#1-logarithmic-order-olog-n" title="Permanent link">¶</a></h3>
|
||||
<p>In contrast to the exponential order, the logarithmic order reflects the "each round is reduced to half" case. Let the input data size be <span class="arithmatex">\(n\)</span>, and since each round is reduced to half, the number of loops is <span class="arithmatex">\(\log_2 n\)</span>, which is the inverse function of <span class="arithmatex">\(2^n\)</span>.</p>
|
||||
<p>The Figure 2-12 and the code below simulate the process of "reducing each round to half" with a time complexity of <span class="arithmatex">\(O(\log_2 n)\)</span>, which is abbreviated as <span class="arithmatex">\(O(\log n)\)</span>.</p>
|
||||
<p>Exponential order growth is extremely rapid and is commonly seen in exhaustive search methods (brute force, backtracking, etc.). For large-scale problems, exponential order is unacceptable, often requiring dynamic programming or greedy algorithms as solutions.</p>
<h3 id="5-logarithmic-order-olog-n">5. Logarithmic Order <span class="arithmatex">\(O(\log n)\)</span><a class="headerlink" href="#5-logarithmic-order-olog-n" title="Permanent link">¶</a></h3>
<p>In contrast to exponential order, logarithmic order reflects situations where "the size is halved each round." Given an input data size <span class="arithmatex">\(n\)</span>, since the size is halved each round, the number of iterations is <span class="arithmatex">\(\log_2 n\)</span>, the inverse function of <span class="arithmatex">\(2^n\)</span>.</p>
<p>The following image and code simulate the "halving each round" process, with a time complexity of <span class="arithmatex">\(O(\log_2 n)\)</span>, commonly abbreviated as <span class="arithmatex">\(O(\log n)\)</span>:</p>
<div class="tabbed-set tabbed-alternate" data-tabs="12:12"><input checked="checked" id="__tabbed_12_1" name="__tabbed_12" type="radio" /><input id="__tabbed_12_2" name="__tabbed_12" type="radio" /><input id="__tabbed_12_3" name="__tabbed_12" type="radio" /><input id="__tabbed_12_4" name="__tabbed_12" type="radio" /><input id="__tabbed_12_5" name="__tabbed_12" type="radio" /><input id="__tabbed_12_6" name="__tabbed_12" type="radio" /><input id="__tabbed_12_7" name="__tabbed_12" type="radio" /><input id="__tabbed_12_8" name="__tabbed_12" type="radio" /><input id="__tabbed_12_9" name="__tabbed_12" type="radio" /><input id="__tabbed_12_10" name="__tabbed_12" type="radio" /><input id="__tabbed_12_11" name="__tabbed_12" type="radio" /><input id="__tabbed_12_12" name="__tabbed_12" type="radio" /><div class="tabbed-labels"><label for="__tabbed_12_1">Python</label><label for="__tabbed_12_2">C++</label><label for="__tabbed_12_3">Java</label><label for="__tabbed_12_4">C#</label><label for="__tabbed_12_5">Go</label><label for="__tabbed_12_6">Swift</label><label for="__tabbed_12_7">JS</label><label for="__tabbed_12_8">TS</label><label for="__tabbed_12_9">Dart</label><label for="__tabbed_12_10">Rust</label><label for="__tabbed_12_11">C</label><label for="__tabbed_12_12">Zig</label></div>
<div class="tabbed-content">
<div class="tabbed-block">
</div>
</div>
</div>
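<p>The code tabs above are collapsed in this diff view. As a reference, the "halving each round" loop can be sketched in Python as follows (the function name <code>logarithmic</code> is an assumption, chosen to match the book's naming style):</p>

```python
def logarithmic(n: int) -> int:
    """Count how many times n can be halved before reaching 1.

    The loop runs about log2(n) times, so the time complexity is O(log n).
    """
    count = 0
    while n > 1:
        n = n / 2  # the problem size is halved each round
        count += 1
    return count
```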
<p><a class="glightbox" href="../time_complexity.assets/time_complexity_logarithmic.png" data-type="image" data-width="100%" data-height="auto" data-desc-position="bottom"><img alt="Logarithmic Order Time Complexity" class="animation-figure" src="../time_complexity.assets/time_complexity_logarithmic.png" /></a></p>
<p align="center"> Figure 2-12 Logarithmic Order Time Complexity </p>
<p>Like exponential order, logarithmic order also frequently appears in recursive functions. The code below forms a recursive tree of height <span class="arithmatex">\(\log_2 n\)</span>:</p>
<div class="tabbed-set tabbed-alternate" data-tabs="13:12"><input checked="checked" id="__tabbed_13_1" name="__tabbed_13" type="radio" /><input id="__tabbed_13_2" name="__tabbed_13" type="radio" /><input id="__tabbed_13_3" name="__tabbed_13" type="radio" /><input id="__tabbed_13_4" name="__tabbed_13" type="radio" /><input id="__tabbed_13_5" name="__tabbed_13" type="radio" /><input id="__tabbed_13_6" name="__tabbed_13" type="radio" /><input id="__tabbed_13_7" name="__tabbed_13" type="radio" /><input id="__tabbed_13_8" name="__tabbed_13" type="radio" /><input id="__tabbed_13_9" name="__tabbed_13" type="radio" /><input id="__tabbed_13_10" name="__tabbed_13" type="radio" /><input id="__tabbed_13_11" name="__tabbed_13" type="radio" /><input id="__tabbed_13_12" name="__tabbed_13" type="radio" /><div class="tabbed-labels"><label for="__tabbed_13_1">Python</label><label for="__tabbed_13_2">C++</label><label for="__tabbed_13_3">Java</label><label for="__tabbed_13_4">C#</label><label for="__tabbed_13_5">Go</label><label for="__tabbed_13_6">Swift</label><label for="__tabbed_13_7">JS</label><label for="__tabbed_13_8">TS</label><label for="__tabbed_13_9">Dart</label><label for="__tabbed_13_10">Rust</label><label for="__tabbed_13_11">C</label><label for="__tabbed_13_12">Zig</label></div>
<div class="tabbed-content">
<div class="tabbed-block">
</div>
</div>
</div>
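<p>The recursive variant in the collapsed tabs can be sketched in Python as follows (the function name <code>log_recur</code> is an assumption, following the book's naming style); each call halves the input, so the recursion depth is about <code>log2(n)</code>:</p>

```python
def log_recur(n: float) -> int:
    """Recursively halve n until it reaches 1.

    The recursion tree is a single chain of height about log2(n),
    so the time complexity is O(log n).
    """
    if n <= 1:
        return 0
    return log_recur(n / 2) + 1
```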
<p>Logarithmic order is typical in algorithms based on the divide-and-conquer strategy, embodying the "split into many" and "simplify complex problems" approach. It's slow-growing and is the most ideal time complexity after constant order.</p>
<div class="admonition tip">
<p class="admonition-title">What is the base of <span class="arithmatex">\(O(\log n)\)</span>?</p>
<p>Technically, "splitting into <span class="arithmatex">\(m\)</span>" corresponds to a time complexity of <span class="arithmatex">\(O(\log_m n)\)</span>. Using the logarithm base change formula, we can equate different logarithmic complexities:</p>
<div class="arithmatex">\[
O(\log_m n) = O(\log_k n / \log_k m) = O(\log_k n)
\]</div>
<p>This means the base <span class="arithmatex">\(m\)</span> can be changed without affecting the complexity. Therefore, we often omit the base <span class="arithmatex">\(m\)</span> and simply denote logarithmic order as <span class="arithmatex">\(O(\log n)\)</span>.</p>
</div>
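<p>The change-of-base identity can be checked numerically. A minimal Python sketch (illustrative only, not one of the book's code listings):</p>

```python
import math

# By the change-of-base formula, log_m(n) / log_k(n) = log_m(k):
# a constant that does not depend on n. The base therefore only
# contributes a constant factor, which big-O notation discards.
for n in (10, 10**3, 10**6):
    ratio = math.log(n, 4) / math.log(n, 2)
    assert abs(ratio - 0.5) < 1e-12  # log_4(2) = 1/2 for every n
```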
<h3 id="6-linear-logarithmic-order-on-log-n">6. Linear-Logarithmic Order <span class="arithmatex">\(O(n \log n)\)</span><a class="headerlink" href="#6-linear-logarithmic-order-on-log-n" title="Permanent link">¶</a></h3>
<p>Linear-logarithmic order often appears in nested loops, with the complexities of the two loops being <span class="arithmatex">\(O(\log n)\)</span> and <span class="arithmatex">\(O(n)\)</span> respectively. The related code is as follows:</p>
<div class="tabbed-set tabbed-alternate" data-tabs="14:12"><input checked="checked" id="__tabbed_14_1" name="__tabbed_14" type="radio" /><input id="__tabbed_14_2" name="__tabbed_14" type="radio" /><input id="__tabbed_14_3" name="__tabbed_14" type="radio" /><input id="__tabbed_14_4" name="__tabbed_14" type="radio" /><input id="__tabbed_14_5" name="__tabbed_14" type="radio" /><input id="__tabbed_14_6" name="__tabbed_14" type="radio" /><input id="__tabbed_14_7" name="__tabbed_14" type="radio" /><input id="__tabbed_14_8" name="__tabbed_14" type="radio" /><input id="__tabbed_14_9" name="__tabbed_14" type="radio" /><input id="__tabbed_14_10" name="__tabbed_14" type="radio" /><input id="__tabbed_14_11" name="__tabbed_14" type="radio" /><input id="__tabbed_14_12" name="__tabbed_14" type="radio" /><div class="tabbed-labels"><label for="__tabbed_14_1">Python</label><label for="__tabbed_14_2">C++</label><label for="__tabbed_14_3">Java</label><label for="__tabbed_14_4">C#</label><label for="__tabbed_14_5">Go</label><label for="__tabbed_14_6">Swift</label><label for="__tabbed_14_7">JS</label><label for="__tabbed_14_8">TS</label><label for="__tabbed_14_9">Dart</label><label for="__tabbed_14_10">Rust</label><label for="__tabbed_14_11">C</label><label for="__tabbed_14_12">Zig</label></div>
<div class="tabbed-content">
<div class="tabbed-block">
</div>
</div>
</div>
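<p>The collapsed tabs above contain the nested-loop example; a minimal Python sketch (the function name <code>linear_log</code> is an assumption) shows the outer <span class="arithmatex">\(O(\log n)\)</span> loop wrapped around an inner <span class="arithmatex">\(O(n)\)</span> loop:</p>

```python
def linear_log(n: int) -> int:
    """Nested loops: the outer loop halves m, running about log2(n)
    rounds; the inner loop runs n times per round.

    Total operations: about n * log2(n), i.e., O(n log n).
    """
    count = 0
    m = n
    while m > 1:            # about log2(n) iterations
        m = m // 2
        for _ in range(n):  # n iterations per round
            count += 1
    return count
```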
<p>The image below demonstrates how linear-logarithmic order is generated. Each level of a binary tree has <span class="arithmatex">\(n\)</span> operations, and the tree has <span class="arithmatex">\(\log_2 n + 1\)</span> levels, resulting in a time complexity of <span class="arithmatex">\(O(n \log n)\)</span>.</p>
<p><a class="glightbox" href="../time_complexity.assets/time_complexity_logarithmic_linear.png" data-type="image" data-width="100%" data-height="auto" data-desc-position="bottom"><img alt="Linear-Logarithmic Order Time Complexity" class="animation-figure" src="../time_complexity.assets/time_complexity_logarithmic_linear.png" /></a></p>
<p align="center"> Figure 2-13 Linear-Logarithmic Order Time Complexity </p>
<p>Mainstream sorting algorithms typically have a time complexity of <span class="arithmatex">\(O(n \log n)\)</span>, such as quicksort, mergesort, and heapsort.</p>
<h3 id="7-factorial-order-on">7. Factorial Order <span class="arithmatex">\(O(n!)\)</span><a class="headerlink" href="#7-factorial-order-on" title="Permanent link">¶</a></h3>
<p>Factorial order corresponds to the mathematical problem of "full permutation." Given <span class="arithmatex">\(n\)</span> distinct elements, the total number of possible permutations is:</p>
<div class="arithmatex">\[
n! = n \times (n - 1) \times (n - 2) \times \dots \times 2 \times 1
\]</div>
<p>Factorials are typically implemented using recursion. As shown in the image and code below, the first level splits into <span class="arithmatex">\(n\)</span> branches, the second level into <span class="arithmatex">\(n - 1\)</span> branches, and so on, stopping after the <span class="arithmatex">\(n\)</span>th level:</p>
<div class="tabbed-set tabbed-alternate" data-tabs="15:12"><input checked="checked" id="__tabbed_15_1" name="__tabbed_15" type="radio" /><input id="__tabbed_15_2" name="__tabbed_15" type="radio" /><input id="__tabbed_15_3" name="__tabbed_15" type="radio" /><input id="__tabbed_15_4" name="__tabbed_15" type="radio" /><input id="__tabbed_15_5" name="__tabbed_15" type="radio" /><input id="__tabbed_15_6" name="__tabbed_15" type="radio" /><input id="__tabbed_15_7" name="__tabbed_15" type="radio" /><input id="__tabbed_15_8" name="__tabbed_15" type="radio" /><input id="__tabbed_15_9" name="__tabbed_15" type="radio" /><input id="__tabbed_15_10" name="__tabbed_15" type="radio" /><input id="__tabbed_15_11" name="__tabbed_15" type="radio" /><input id="__tabbed_15_12" name="__tabbed_15" type="radio" /><div class="tabbed-labels"><label for="__tabbed_15_1">Python</label><label for="__tabbed_15_2">C++</label><label for="__tabbed_15_3">Java</label><label for="__tabbed_15_4">C#</label><label for="__tabbed_15_5">Go</label><label for="__tabbed_15_6">Swift</label><label for="__tabbed_15_7">JS</label><label for="__tabbed_15_8">TS</label><label for="__tabbed_15_9">Dart</label><label for="__tabbed_15_10">Rust</label><label for="__tabbed_15_11">C</label><label for="__tabbed_15_12">Zig</label></div>
<div class="tabbed-content">
<div class="tabbed-block">
</div>
</div>
</div>
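<p>The recursion described above can be sketched in Python as follows (the function name <code>factorial_recur</code> is an assumption, matching the book's naming style); each call branches <span class="arithmatex">\(n\)</span> ways, so the total number of operations is <span class="arithmatex">\(n!\)</span>:</p>

```python
def factorial_recur(n: int) -> int:
    """Count the operations of a full-permutation-style recursion.

    The first level splits into n branches, the next into n - 1, and
    so on, so the call tree performs n! units of work -- O(n!) time.
    """
    if n == 0:
        return 1
    count = 0
    for _ in range(n):                # split into n branches
        count += factorial_recur(n - 1)
    return count
```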
<p><a class="glightbox" href="../time_complexity.assets/time_complexity_factorial.png" data-type="image" data-width="100%" data-height="auto" data-desc-position="bottom"><img alt="Factorial Order Time Complexity" class="animation-figure" src="../time_complexity.assets/time_complexity_factorial.png" /></a></p>
<p align="center"> Figure 2-14 Factorial Order Time Complexity </p>
<p>Note that since <span class="arithmatex">\(n! > 2^n\)</span> for all <span class="arithmatex">\(n \geq 4\)</span>, factorial order grows even faster than exponential order; it is likewise unacceptable for large <span class="arithmatex">\(n\)</span> values.</p>
<h2 id="235-worst-best-and-average-time-complexities">2.3.5 Worst, Best, and Average Time Complexities<a class="headerlink" href="#235-worst-best-and-average-time-complexities" title="Permanent link">¶</a></h2>
<p><strong>The time efficiency of an algorithm is often not fixed but depends on the distribution of the input data</strong>. Assume we have an array <code>nums</code> of length <span class="arithmatex">\(n\)</span>, consisting of numbers from <span class="arithmatex">\(1\)</span> to <span class="arithmatex">\(n\)</span>, each appearing only once, but in a randomly shuffled order. The task is to return the index of the element <span class="arithmatex">\(1\)</span>. We can draw the following conclusions:</p>
<ul>
<li>When <code>nums = [?, ?, ..., 1]</code>, that is, when the last element is <span class="arithmatex">\(1\)</span>, it requires a complete traversal of the array, <strong>achieving the worst-case time complexity of <span class="arithmatex">\(O(n)\)</span></strong>.</li>
<li>When <code>nums = [1, ?, ?, ...]</code>, that is, when the first element is <span class="arithmatex">\(1\)</span>, no matter the length of the array, no further traversal is needed, <strong>achieving the best-case time complexity of <span class="arithmatex">\(\Omega(1)\)</span></strong>.</li>
</ul>
<p>The "worst-case time complexity" corresponds to the asymptotic upper bound, denoted by the big <span class="arithmatex">\(O\)</span> notation. Correspondingly, the "best-case time complexity" corresponds to the asymptotic lower bound, denoted by <span class="arithmatex">\(\Omega\)</span>:</p>
<div class="tabbed-set tabbed-alternate" data-tabs="16:12"><input checked="checked" id="__tabbed_16_1" name="__tabbed_16" type="radio" /><input id="__tabbed_16_2" name="__tabbed_16" type="radio" /><input id="__tabbed_16_3" name="__tabbed_16" type="radio" /><input id="__tabbed_16_4" name="__tabbed_16" type="radio" /><input id="__tabbed_16_5" name="__tabbed_16" type="radio" /><input id="__tabbed_16_6" name="__tabbed_16" type="radio" /><input id="__tabbed_16_7" name="__tabbed_16" type="radio" /><input id="__tabbed_16_8" name="__tabbed_16" type="radio" /><input id="__tabbed_16_9" name="__tabbed_16" type="radio" /><input id="__tabbed_16_10" name="__tabbed_16" type="radio" /><input id="__tabbed_16_11" name="__tabbed_16" type="radio" /><input id="__tabbed_16_12" name="__tabbed_16" type="radio" /><div class="tabbed-labels"><label for="__tabbed_16_1">Python</label><label for="__tabbed_16_2">C++</label><label for="__tabbed_16_3">Java</label><label for="__tabbed_16_4">C#</label><label for="__tabbed_16_5">Go</label><label for="__tabbed_16_6">Swift</label><label for="__tabbed_16_7">JS</label><label for="__tabbed_16_8">TS</label><label for="__tabbed_16_9">Dart</label><label for="__tabbed_16_10">Rust</label><label for="__tabbed_16_11">C</label><label for="__tabbed_16_12">Zig</label></div>
<div class="tabbed-content">
<div class="tabbed-block">
</div>
</div>
</div>
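<p>The collapsed tabs above implement the example described earlier; a minimal Python sketch (the names <code>random_numbers</code> and <code>find_one</code> are assumed from context):</p>

```python
import random

def random_numbers(n: int) -> list[int]:
    """Return the numbers 1..n in a randomly shuffled order."""
    nums = list(range(1, n + 1))
    random.shuffle(nums)
    return nums

def find_one(nums: list[int]) -> int:
    """Linearly scan for the element 1 and return its index.

    Best case O(1): the element 1 is in the first slot.
    Worst case O(n): the element 1 is in the last slot.
    """
    for i, num in enumerate(nums):
        if num == 1:
            return i
    return -1
```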
<p>It's important to note that the best-case time complexity is rarely used in practice, as it is usually only achievable under very low probabilities and might be misleading. <strong>The worst-case time complexity is more practical as it provides a safety value for efficiency</strong>, allowing us to confidently use the algorithm.</p>
<p>From the above example, it's clear that both the worst-case and best-case time complexities only occur under "special data distributions," which may have a small probability of occurrence and may not accurately reflect the algorithm's run efficiency. In contrast, <strong>the average time complexity can reflect the algorithm's efficiency under random input data</strong>, denoted by the <span class="arithmatex">\(\Theta\)</span> notation.</p>
<p>For some algorithms, we can simply estimate the average case under a random data distribution. For example, in the aforementioned example, since the input array is shuffled, the probability of element <span class="arithmatex">\(1\)</span> appearing at any index is equal. Therefore, the average number of loops for the algorithm is half the length of the array <span class="arithmatex">\(n / 2\)</span>, giving an average time complexity of <span class="arithmatex">\(\Theta(n / 2) = \Theta(n)\)</span>.</p>
<p>However, calculating the average time complexity for more complex algorithms can be quite difficult, as it's challenging to analyze the overall mathematical expectation under the data distribution. In such cases, we usually use the worst-case time complexity as the standard for judging the efficiency of the algorithm.</p>
<div class="admonition question">
<p class="admonition-title">Why is the <span class="arithmatex">\(\Theta\)</span> symbol rarely seen?</p>
<p>Possibly because the <span class="arithmatex">\(O\)</span> notation is more commonly spoken, it is often used to represent the average time complexity. However, strictly speaking, this practice is not accurate. In this book and other materials, if you encounter statements like "average time complexity <span class="arithmatex">\(O(n)\)</span>", please understand it directly as <span class="arithmatex">\(\Theta(n)\)</span>.</p>
</div>