@@ -2,7 +2,7 @@

> Source: [https://tensorflow.google.cn/tutorials/quickstart/beginner](https://tensorflow.google.cn/tutorials/quickstart/beginner)

**Note:** Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are accurate or reflect the latest [official English documentation](https://tensorflow.google.cn/?hl=en). If you have suggestions for improving this translation, please submit a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, join the [docs-zh-cn@tensorflow.org Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn).

This is a [Google Colaboratory](https://colab.research.google.com/notebooks/welcome.ipynb) notebook file. Python programs run directly in the browser, which makes this a great way to learn TensorFlow. To follow this tutorial, run the notebook in Google Colab by clicking the button at the top of this page.

@@ -2,7 +2,7 @@

> Source: [https://tensorflow.google.cn/tutorials/quickstart/advanced](https://tensorflow.google.cn/tutorials/quickstart/advanced)

**Note:** Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are accurate or reflect the latest [official English documentation](https://tensorflow.google.cn/?hl=en). If you have suggestions for improving this translation, please submit a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, join the [docs-zh-cn@tensorflow.org Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn).

This is a [Google Colaboratory](https://colab.research.google.com/notebooks/welcome.ipynb) notebook file. Python programs run directly in the browser, a good way to learn and use TensorFlow. To follow this tutorial, run the notebook in Google Colab by clicking the button at the top of this page.

@@ -2,7 +2,7 @@

> Source: [https://tensorflow.google.cn/tutorials/keras/text_classification](https://tensorflow.google.cn/tutorials/keras/text_classification)

**Note:** Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are accurate or reflect the latest [official English documentation](https://tensorflow.google.cn/?hl=en). If you have suggestions for improving this translation, please submit a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, join the [docs-zh-cn@tensorflow.org Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn).

This notebook classifies movie reviews as *positive* or *negative* using the text of the review. This is an example of *binary* (two-class) classification, an important and widely applicable kind of machine learning problem.

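Since both this tutorial and the next one revolve around binary text classification, a minimal, hypothetical sketch of such a model may help; the vocabulary size, sequence length, and random stand-in data are illustrative assumptions, not the tutorial's actual pipeline.

```
import numpy as np
import tensorflow as tf

vocab_size, seq_len = 10000, 256
x = np.random.randint(1, vocab_size, size=(64, seq_len))   # stand-in encoded reviews
y = np.random.randint(0, 2, size=(64, 1))                  # 0 = negative, 1 = positive

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 16, input_length=seq_len),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation='sigmoid'),  # single probability -> binary
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x, y, epochs=1, verbose=0)
```
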
@@ -2,7 +2,7 @@

> Source: [https://tensorflow.google.cn/tutorials/keras/text_classification_with_hub](https://tensorflow.google.cn/tutorials/keras/text_classification_with_hub)

**Note:** Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are accurate or reflect the latest [official English documentation](https://tensorflow.google.cn/?hl=en). If you have suggestions for improving this translation, please submit a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, join the [docs-zh-cn@tensorflow.org Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn).

This notebook classifies movie reviews as *positive* or *negative* using the text of the review. This is an example of *binary* (two-class) classification, an important and widely applicable kind of machine learning problem.

@@ -4,7 +4,7 @@

**Note:** Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are accurate or reflect the latest [official English documentation](https://tensorflow.google.cn/?hl=en). If you have suggestions for improving this translation, please submit a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, join the [docs-zh-cn@tensorflow.org Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn).

In a *regression* problem, we aim to predict the output of a continuous value, such as a price or a probability. Contrast this with a *classification* problem, where we aim to select a class from a list of classes (for example, given a picture containing an apple or an orange, recognizing which fruit is in the picture).

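To make the contrast concrete, here is a minimal regression sketch: a single unbounded output trained with a mean-squared-error loss. The random arrays are stand-ins for real data.

```
import numpy as np
import tensorflow as tf

x = np.random.rand(256, 8).astype(np.float32)   # 8 input features
y = np.random.rand(256, 1).astype(np.float32)   # continuous target, e.g. a price

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(8,)),
    tf.keras.layers.Dense(1),   # no activation: unbounded continuous output
])
model.compile(optimizer='adam', loss='mse', metrics=['mae'])
model.fit(x, y, epochs=1, verbose=0)
```
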
@@ -502,7 +502,7 @@ Text(0.5, 0, 'Epochs [Log Scale]')

**Note:** All of the above training runs used [`callbacks.EarlyStopping`](https://tensorflow.google.cn/api_docs/python/tf/keras/callbacks/EarlyStopping) to end training once it was clear the model was not making progress.

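For reference, a typical configuration of that callback looks like the sketch below; the monitored quantity and patience value are illustrative choices, not the tutorial's exact settings.

```
early_stopping = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',          # stop when validation loss stops improving
    patience=10,                 # allow this many stagnant epochs first
    restore_best_weights=True)   # roll back to the best epoch

# model.fit(..., callbacks=[early_stopping])
```
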
### View in TensorBoard

If you want to share TensorBoard results you can upload the logs to [TensorBoard.dev](https://tensorboard.dev/) by copying the following into a code-cell.

**Note:** This step requires a Google account.

```
tensorboard dev upload --logdir {logdir}/sizes
```

**Caution:** This command does not terminate. It's designed to continuously upload the results of long-running experiments. Once your data is uploaded you need to stop it using the "interrupt execution" option in your notebook tool.

## Strategies to prevent overfitting
@@ -2,7 +2,7 @@

> Source: [https://tensorflow.google.cn/tutorials/keras/save_and_load](https://tensorflow.google.cn/tutorials/keras/save_and_load)

**Note:** Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are accurate or reflect the latest [official English documentation](https://tensorflow.google.cn/?hl=en). If you have suggestions for improving this translation, please submit a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, join the [docs-zh-cn@tensorflow.org Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn).

Model progress can be saved during and after training. This means a model can resume from wherever it was interrupted and avoid long stretches of retraining. Saving also means you can share your model, and others can recreate your work from it. When publishing research models and techniques, most machine learning practitioners share:

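As a minimal sketch of the two common flavors of saving (weights-only checkpoints and a full SavedModel), assuming a throwaway one-layer model:

```
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='adam', loss='mse')

model.save_weights('./checkpoints/my_checkpoint')   # weights only
model.save('saved_model/my_model')                  # architecture + weights + optimizer

model.load_weights('./checkpoints/my_checkpoint')
restored = tf.keras.models.load_model('saved_model/my_model')
```
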
@@ -2,7 +2,7 @@

> Source: [https://tensorflow.google.cn/tutorials/load_data/images](https://tensorflow.google.cn/tutorials/load_data/images)

**Note:** Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are accurate or reflect the latest [official English documentation](https://tensorflow.google.cn/?hl=en). If you have suggestions for improving this translation, please submit a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, join the [docs-zh-cn@tensorflow.org Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn).

This tutorial provides a simple example of how to load images using [`tf.data`](https://tensorflow.google.cn/api_docs/python/tf/data).

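A minimal sketch of such a pipeline is shown below; the glob pattern, image size, and batch size are hypothetical.

```
import tensorflow as tf

def load_image(path):
    img = tf.io.read_file(path)
    img = tf.image.decode_jpeg(img, channels=3)        # bytes -> uint8 HWC tensor
    img = tf.image.resize(img, [192, 192]) / 255.0     # resize and rescale
    return img

ds = tf.data.Dataset.list_files('photos/*/*.jpg')      # assumed directory layout
ds = ds.map(load_image, num_parallel_calls=tf.data.experimental.AUTOTUNE)
ds = ds.batch(32).prefetch(tf.data.experimental.AUTOTUNE)
```
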
@@ -2,7 +2,7 @@

> Source: [https://tensorflow.google.cn/tutorials/load_data/text](https://tensorflow.google.cn/tutorials/load_data/text)

**Note:** Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are accurate or reflect the latest [official English documentation](https://tensorflow.google.cn/?hl=en). If you have suggestions for improving this translation, please submit a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, join the [docs-zh-cn@tensorflow.org Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn).

This tutorial provides an example of how to use [`tf.data.TextLineDataset`](https://tensorflow.google.cn/api_docs/python/tf/data/TextLineDataset) to load text files. `TextLineDataset` is typically used to build a dataset from text files, in which each line of the original file becomes an example. This is useful for most line-based text data (for example, poetry or error logs). Below, we will use three different English translations of the same work (Homer's Iliad) and train a model to identify the translator from a single line of text.

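A hedged sketch of the labeling step described above, with hypothetical file names standing in for the three Iliad translations:

```
import tensorflow as tf

file_names = ['cowper.txt', 'derby.txt', 'butler.txt']   # one file per translator

labeled_sets = []
for i, file_name in enumerate(file_names):
    lines = tf.data.TextLineDataset(file_name)           # one example per line
    labeled_sets.append(lines.map(lambda line, label=i: (line, label)))

all_labeled = labeled_sets[0]
for labeled in labeled_sets[1:]:
    all_labeled = all_labeled.concatenate(labeled)
all_labeled = all_labeled.shuffle(50000)
```
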
@@ -2,7 +2,7 @@

> Source: [https://tensorflow.google.cn/tutorials/load_data/csv](https://tensorflow.google.cn/tutorials/load_data/csv)

**Note:** Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are accurate or reflect the latest [official English documentation](https://tensorflow.google.cn/?hl=en). If you have suggestions for improving this translation, please submit a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, join the [docs-zh-cn@tensorflow.org Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn).

This tutorial walks through an example of how to load CSV data into a [`tf.data.Dataset`](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset).

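A minimal sketch of that loading step, assuming a hypothetical `train.csv` with a `survived` label column:

```
import tensorflow as tf

dataset = tf.data.experimental.make_csv_dataset(
    'train.csv',
    batch_size=5,
    label_name='survived',   # assumed label column
    num_epochs=1,
    ignore_errors=True)

for features, labels in dataset.take(1):
    for name, value in features.items():
        print(name, value.numpy())
```
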
@@ -2,7 +2,7 @@

> Source: [https://tensorflow.google.cn/tutorials/load_data/numpy](https://tensorflow.google.cn/tutorials/load_data/numpy)

**Note:** Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are accurate or reflect the latest [official English documentation](https://tensorflow.google.cn/?hl=en). If you have suggestions for improving this translation, please submit a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, join the [docs-zh-cn@tensorflow.org Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn).

This tutorial provides an example of loading data from NumPy arrays into a [`tf.data.Dataset`](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset). The example loads the MNIST dataset from a `.npz` file; however, the source of the NumPy data is not important to the example.

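A minimal sketch of the NumPy-to-Dataset step, with random arrays standing in for the `.npz` contents:

```
import numpy as np
import tensorflow as tf

examples = np.random.rand(1000, 28, 28).astype(np.float32)   # stand-in images
labels = np.random.randint(0, 10, size=(1000,))              # stand-in labels

dataset = tf.data.Dataset.from_tensor_slices((examples, labels))
dataset = dataset.shuffle(1000).batch(64)
```
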
@@ -2,7 +2,7 @@

> Source: [https://tensorflow.google.cn/tutorials/load_data/pandas_dataframe](https://tensorflow.google.cn/tutorials/load_data/pandas_dataframe)

**Note:** Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are accurate or reflect the latest [official English documentation](https://tensorflow.google.cn/?hl=en). If you have suggestions for improving this translation, please submit a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, join the [docs-zh-cn@tensorflow.org Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn).

This tutorial provides an example of how to load pandas DataFrames into a [`tf.data.Dataset`](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset).

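A minimal sketch of the DataFrame-to-Dataset step; the columns are hypothetical:

```
import pandas as pd
import tensorflow as tf

df = pd.DataFrame({'age': [63, 67, 41], 'thal': [2, 3, 3], 'target': [0, 1, 0]})
labels = df.pop('target')                       # separate the label column

dataset = tf.data.Dataset.from_tensor_slices((df.values, labels.values))
for features, label in dataset.take(2):
    print(features.numpy(), label.numpy())
```
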
@@ -14,7 +14,7 @@ TensorFlow Text requires TensorFlow 2.0, and is fully compatible with eager mode

* * *

**Note:** On rare occasions, this import may fail looking for the TF library. If that happens, reset the runtime and rerun the `pip install -q` above.

```
!pip install -q tensorflow-text
```

@@ -2,7 +2,7 @@

> Source: [https://tensorflow.google.cn/tutorials/estimator/premade](https://tensorflow.google.cn/tutorials/estimator/premade)

**Note:** Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are accurate or reflect the latest [official English documentation](https://tensorflow.google.cn/?hl=en). If you have suggestions for improving this translation, please submit a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, join the [docs-zh-cn@tensorflow.org Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn).

This tutorial shows you how to solve the iris classification problem in TensorFlow using Estimators. An Estimator is a high-level TensorFlow representation of a complete model, designed for easy scaling and asynchronous training. For more details, see [Estimators](https://tensorflow.google.cn/guide/estimator).

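A hedged sketch of the premade-Estimator workflow, with a toy in-memory input function standing in for the tutorial's real data loading:

```
import tensorflow as tf

CSV_COLUMN_NAMES = ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth']
feature_columns = [tf.feature_column.numeric_column(key=k) for k in CSV_COLUMN_NAMES]

classifier = tf.estimator.DNNClassifier(
    feature_columns=feature_columns,
    hidden_units=[30, 10],    # two hidden layers
    n_classes=3)              # three iris species

def input_fn():
    features = {'SepalLength': [5.1, 5.9, 6.9],
                'SepalWidth': [3.3, 3.0, 3.1],
                'PetalLength': [1.7, 4.2, 5.4],
                'PetalWidth': [0.5, 1.5, 2.1]}
    labels = [0, 1, 2]
    return tf.data.Dataset.from_tensor_slices((features, labels)).repeat().batch(3)

classifier.train(input_fn=input_fn, steps=5)
```
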
@@ -2,7 +2,7 @@

> Source: [https://tensorflow.google.cn/tutorials/estimator/boosted_trees](https://tensorflow.google.cn/tutorials/estimator/boosted_trees)

**Note:** Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are accurate or reflect the latest [official English documentation](https://tensorflow.google.cn/?hl=en). If you have suggestions for improving this translation, please submit a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, join the [docs-zh-cn@tensorflow.org Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn).

This tutorial is an end-to-end walkthrough of training a Gradient Boosting model using decision trees with the [`tf.estimator`](https://tensorflow.google.cn/api_docs/python/tf/estimator) API. Boosted Trees models are among the most popular and effective machine learning approaches for both regression and classification. They are an ensemble technique that combines the predictions of several (think tens, hundreds, or even thousands of) tree models.

@@ -4,7 +4,7 @@

**Note:** Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are accurate or reflect the latest [official English documentation](https://tensorflow.google.cn/?hl=en). If you have suggestions for improving this translation, please submit a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, join the [docs-zh-cn@tensorflow.org Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn).

For an end-to-end walkthrough of training a Gradient Boosting model, check out [training Boosted Trees models in TensorFlow](https://tensorflow.google.cn/tutorials/estimator/boosted_trees). In this tutorial, you will:

@@ -4,7 +4,7 @@

**Note:** Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are accurate or reflect the latest [official English documentation](https://tensorflow.google.cn/?hl=en). If you have suggestions for improving this translation, please submit a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, join the [docs-zh-cn@tensorflow.org Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn).

## Overview

@@ -2,7 +2,7 @@

> Source: [https://tensorflow.google.cn/tutorials/distribute/multi_worker_with_keras](https://tensorflow.google.cn/tutorials/distribute/multi_worker_with_keras)

**Note:** Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are accurate or reflect the latest [official English documentation](https://tensorflow.google.cn/?hl=en). If you have suggestions for improving this translation, please submit a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, join the [docs-zh-cn@tensorflow.org Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn).

## Overview

@@ -2,7 +2,7 @@

> Source: [https://tensorflow.google.cn/tutorials/distribute/multi_worker_with_estimator](https://tensorflow.google.cn/tutorials/distribute/multi_worker_with_estimator)

**Note:** Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are accurate or reflect the latest [official English documentation](https://tensorflow.google.cn/?hl=en). If you have suggestions for improving this translation, please submit a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, join the [docs-zh-cn@tensorflow.org Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn).

## Overview

@@ -261,7 +261,7 @@ array([[1.],

* Replica 2: [6, 7]
* Replica 3: []

**Note:** The above examples only illustrate how a global batch is split across replicas. It is not advisable to depend on the actual values that end up on each replica, as they can change with the implementation.

Rebatching the dataset has a space complexity that increases linearly with the number of replicas, so in the multi-worker training case the input pipeline can run into OOM errors.

@@ -372,7 +372,7 @@ The [`tf.distribute.InputContext`](https://tensorflow.google.cn/api_docs/python/

[`tf.distribute`](https://tensorflow.google.cn/api_docs/python/tf/distribute) does not add a prefetch transformation at the end of the [`tf.data.Dataset`](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset) returned by the user-provided input function.

**Note:** Both [`tf.distribute.Strategy.experimental_distribute_dataset`](https://tensorflow.google.cn/api_docs/python/tf/distribute/Strategy#experimental_distribute_dataset) and `tf.distribute.Strategy.experimental_distribute_datasets_from_function` return **[`tf.distribute.DistributedDataset`](https://tensorflow.google.cn/api_docs/python/tf/distribute/DistributedDataset) instances that are not of type [`tf.data.Dataset`](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset)**. You can iterate over these instances (as shown in the Distributed Iterators section) and use the `element_spec` property.

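A minimal sketch tying the two points above together, showing a distributed dataset being iterated and its `element_spec` inspected; the batch size is an assumption:

```
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
global_batch_size = 16

dataset = tf.data.Dataset.from_tensor_slices(tf.range(64.0)).batch(global_batch_size)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

print(dist_dataset.element_spec)   # a DistributedDataset, not a tf.data.Dataset
for batch in dist_dataset:         # but you can iterate over it directly
    pass
```
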
## Distributed Iterators
@@ -2118,7 +2118,7 @@ d = d.map(parser_fn, num_parallel_calls=num_map_threads)

* The order in which the data is processed by the workers when using `tf.distribute.experimental_distribute_dataset` or `tf.distribute.experimental_distribute_datasets_from_function` is not guaranteed. Ordering typically matters if you are using [`tf.distribute`](https://tensorflow.google.cn/api_docs/python/tf/distribute) to scale prediction. You can, however, insert an index for each element in the batch and order outputs accordingly. The following snippet is an example of how to order outputs.

**Note:** [`tf.distribute.MirroredStrategy()`](https://tensorflow.google.cn/api_docs/python/tf/distribute/MirroredStrategy) is used here for the sake of convenience; reordering inputs is only needed when using multiple workers, whereas [`tf.distribute.MirroredStrategy`](https://tensorflow.google.cn/api_docs/python/tf/distribute/MirroredStrategy) distributes training on a single worker.

```
mirrored_strategy = tf.distribute.MirroredStrategy()
```

@@ -2193,7 +2193,7 @@ tf.Tensor(1.0, shape=(), dtype=float32)

If you have a generator function that you want to use, you can create a [`tf.data.Dataset`](https://tensorflow.google.cn/api_docs/python/tf/data/Dataset) instance using the `from_generator` API.

**Note:** This is currently not supported for [`tf.distribute.TPUStrategy`](https://tensorflow.google.cn/api_docs/python/tf/distribute/TPUStrategy).

```
mirrored_strategy = tf.distribute.MirroredStrategy()
```

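A hedged continuation of that snippet, with an illustrative generator; the `output_types`/`output_shapes` arguments describe what the generator yields:

```
import tensorflow as tf

def gen():
    for i in range(8):
        yield i * 1.0   # a stream of scalar floats

dataset = tf.data.Dataset.from_generator(gen, output_types=tf.float32, output_shapes=())

mirrored_strategy = tf.distribute.MirroredStrategy()
dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset.batch(4))
```
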
@@ -2,7 +2,7 @@

> Source: [https://tensorflow.google.cn/tutorials/images/cnn](https://tensorflow.google.cn/tutorials/images/cnn)

**Note:** Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are accurate or reflect the latest [official English documentation](https://tensorflow.google.cn/?hl=en). If you have suggestions for improving this translation, please submit a pull request to the [tensorflow/docs-l10n](https://github.com/tensorflow/docs-l10n) GitHub repository. To volunteer to write or review translations, join the [docs-zh-cn@tensorflow.org Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn).

### Import TensorFlow

@@ -220,7 +220,7 @@ The RGB channel values are in the `[0, 255]` range. This is not ideal for a neur

```
normalization_layer = layers.experimental.preprocessing.Rescaling(1./255)
```

**Note:** The Keras Preprocessing utilities and layers introduced in this section are currently experimental and may change.

There are two ways to use this layer. You can apply it to the dataset by calling map:

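The map call itself was elided above; a hedged reconstruction, assuming `train_ds` yields `(image, label)` pairs:

```
normalized_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
```
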
@@ -239,7 +239,7 @@ print(np.min(first_image), np.max(first_image))

Or, you can include the layer inside your model definition, which can simplify deployment. Let's use the second approach here.

**Note:** You previously resized images using the `image_size` argument of `image_dataset_from_directory`. If you want to include the resizing logic in your model as well, you can use the [Resizing](https://tensorflow.google.cn/api_docs/python/tf/keras/layers/experimental/preprocessing/Resizing) layer.

# Create the model

@@ -574,7 +574,7 @@ plt.show()

Finally, let's use our model to classify an image that wasn't included in the training or validation sets.

**Note:** Data augmentation and Dropout layers are inactive at inference time.

```
sunflower_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/592px-Red_sunflower.jpg"
```

@@ -146,7 +146,7 @@ data_augmentation = tf.keras.Sequential([

```
])
```

**Note:** These layers are active only during training, when you call `model.fit`. They are inactive when the model is used in inference mode in `model.evaluate` or `model.predict`.

Let's repeatedly apply these layers to the same image and see the result.

@@ -171,13 +171,13 @@ In a moment, you will download `tf.keras.applications.MobileNetV2` for use as yo

```
preprocess_input = tf.keras.applications.mobilenet_v2.preprocess_input
```

**Note:** Alternatively, you could rescale pixel values from `[0, 255]` to `[-1, 1]` using a [Rescaling](https://tensorflow.google.cn/api_docs/python/tf/keras/layers/experimental/preprocessing/Rescaling) layer.

```
rescale = tf.keras.layers.experimental.preprocessing.Rescaling(1./127.5, offset=-1)
```

**Note:** If using other `tf.keras.applications`, be sure to check the API doc to determine if they expect pixels in `[-1, 1]` or `[0, 1]`, or use the included `preprocess_input` function.

## Create the base model from the pre-trained convnets

@@ -755,7 +755,7 @@ plt.show()

**Note:** If you are wondering why the validation metrics are clearly better than the training metrics, the main factor is that layers like `tf.keras.layers.BatchNormalization` and `tf.keras.layers.Dropout` affect accuracy during training. They are turned off when calculating validation loss.

To a lesser extent, it is also because training metrics report the average for an epoch, while validation metrics are evaluated after the epoch, so validation metrics see a model that has trained slightly longer.

@@ -765,7 +765,7 @@ In the feature extraction experiment, you were only training a few layers on top

One way to increase performance even further is to train (or "fine-tune") the weights of the top layers of the pre-trained model alongside the training of the classifier you added. The training process will force the weights to be tuned from generic feature maps to features associated specifically with the dataset.

**Note:** This should only be attempted after you have trained the top-level classifier with the pre-trained model set to non-trainable. If you add a randomly initialized classifier on top of a pre-trained model and attempt to train all layers jointly, the magnitude of the gradient updates will be too large (due to the random weights from the classifier) and your pre-trained model will forget what it has learned.

Also, you should try to fine-tune a small number of top layers rather than the whole MobileNet model, as sketched below. In most convolutional networks, the higher up a layer is, the more specialized it is. The first few layers learn very simple and generic features that generalize to almost all types of images. As you go higher up, the features are increasingly more specific to the dataset on which the model was trained. The goal of fine-tuning is to adapt these specialized features to work with the new dataset, rather than overwrite the generic learning.

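A hedged sketch of that unfreeze-the-top-layers recipe, assuming the `base_model` and `model` objects built earlier in the tutorial; the cutoff index and learning rate are illustrative:

```
base_model.trainable = True
fine_tune_at = 100                        # freeze everything before this layer
for layer in base_model.layers[:fine_tune_at]:
    layer.trainable = False

model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),   # low LR: gentle updates
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])
```
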
@@ -80,7 +80,7 @@ _ = plt.title(get_label_name(label))

## Use Keras preprocessing layers

**Note:** The [Keras Preprocessing Layers](https://tensorflow.google.cn/api_docs/python/tf/keras/layers/experimental/preprocessing) introduced in this section are currently experimental.

### Resizing and rescaling

@@ -95,7 +95,7 @@ resize_and_rescale = tf.keras.Sequential([

```
])
```

**Note:** The rescaling layer above standardizes pixel values to `[0, 1]`. If instead you wanted `[-1, 1]`, you would write `Rescaling(1./127.5, offset=-1)`.

You can see the result of applying these layers to an image.

@@ -170,7 +170,7 @@ There are two important points to be aware of in this case:

* When you export your model using `model.save`, the preprocessing layers will be saved along with the rest of your model. If you later deploy this model, it will automatically standardize images (according to the configuration of your layers). This can save you from the effort of having to reimplement that logic server-side.

**Note:** Data augmentation is inactive at test time, so input images will only be augmented during calls to `model.fit` (not `model.evaluate` or `model.predict`).

#### Option 2: Apply the preprocessing layers to your dataset

@@ -190,7 +190,7 @@ You can find an example of the first option in the [image classification](https:

Configure the train, validation, and test datasets with the preprocessing layers you created above. You will also configure the datasets for performance, using parallel reads and buffered prefetching to yield batches from disk without I/O becoming blocking. You can learn more about dataset performance in the [Better performance with the tf.data API](https://tensorflow.google.cn/guide/data_performance) guide.

**Note:** Data augmentation should only be applied to the training set.

```
batch_size = 32
```

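A hedged sketch of the configuration described above, assuming `train_ds`/`val_ds`/`test_ds` yield `(image, label)` pairs and reusing the `resize_and_rescale` and `data_augmentation` layers built earlier:

```
AUTOTUNE = tf.data.experimental.AUTOTUNE

def prepare(ds, augment=False):
    ds = ds.map(lambda x, y: (resize_and_rescale(x), y), num_parallel_calls=AUTOTUNE)
    if augment:   # augment the training set only
        ds = ds.map(lambda x, y: (data_augmentation(x, training=True), y),
                    num_parallel_calls=AUTOTUNE)
    return ds.batch(batch_size).prefetch(AUTOTUNE)

train_ds = prepare(train_ds, augment=True)
val_ds = prepare(val_ds)
test_ds = prepare(test_ds)
```
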
@@ -2,7 +2,7 @@

> Source: [https://tensorflow.google.cn/tutorials/images/segmentation](https://tensorflow.google.cn/tutorials/images/segmentation)

**Note:** Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are accurate or reflect the latest [official English documentation](https://tensorflow.google.cn/?hl=en). If you have suggestions for improving this translation, please submit a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, join the [docs-zh-cn@tensorflow.org Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn).

This tutorial focuses on the task of image segmentation, using a modified [U-Net](https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/).

@@ -2,7 +2,7 @@

> Source: [https://tensorflow.google.cn/tutorials/text/nmt_with_attention](https://tensorflow.google.cn/tutorials/text/nmt_with_attention)

**Note:** Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are accurate or reflect the latest [official English documentation](https://tensorflow.google.cn/?hl=en). If you have suggestions for improving this translation, please submit a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, join the [docs-zh-cn@tensorflow.org Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn).

This notebook trains a sequence-to-sequence (seq2seq) model for Spanish-to-English translation. This is an advanced example that assumes some knowledge of sequence-to-sequence models.

@@ -47,7 +47,7 @@ import pickle

You will use the [MS-COCO dataset](http://cocodataset.org/#home) to train your model. The dataset contains over 82,000 images, each of which has at least 5 different caption annotations. The code below downloads and extracts the dataset automatically.

**Caution:** Large download ahead. You'll use the training set, which is a 13GB file.

```
# Download caption annotation files
```

@@ -4,7 +4,7 @@

**Note:** Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are accurate or reflect the latest [official English documentation](https://tensorflow.google.cn/?hl=en). If you have suggestions for improving this translation, please submit a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, join the [docs-zh-cn@tensorflow.org Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn).

This tutorial trains a [Transformer model](https://arxiv.org/abs/1706.03762) to translate Portuguese to English. This is an advanced example that assumes knowledge of [text generation](/tutorials/text/text_generation) and [attention](/tutorials/text/nmt_with_attention).

@@ -456,7 +456,7 @@ checkpoint.restore(

**Note:** The pretrained `TransformerEncoder` is also available on [TensorFlow Hub](https://tensorflow.org/hub). See the [Hub appendix](#hub_bert) for details.

### Set up the optimizer

@@ -2,7 +2,7 @@

> Source: [https://tensorflow.google.cn/tutorials/structured_data/feature_columns](https://tensorflow.google.cn/tutorials/structured_data/feature_columns)

**Note:** Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are accurate or reflect the latest [official English documentation](https://tensorflow.google.cn/?hl=en). If you have suggestions for improving this translation, please submit a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, join the [docs-zh-cn@tensorflow.org Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn).

This tutorial demonstrates how to classify structured data (for example, tabular data in a CSV). We will use [Keras](https://tensorflow.google.cn/guide/keras) to define the model, and [feature columns](https://tensorflow.google.cn/guide/feature_columns) as a bridge mapping from columns in the CSV to the features used to train the model. This tutorial contains complete code to:

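A minimal sketch of that bridge, with hypothetical column names:

```
import tensorflow as tf

feature_columns = [
    tf.feature_column.numeric_column('age'),
    tf.feature_column.indicator_column(
        tf.feature_column.categorical_column_with_vocabulary_list(
            'thal', ['fixed', 'normal', 'reversible'])),
]

model = tf.keras.Sequential([
    tf.keras.layers.DenseFeatures(feature_columns),  # dict inputs -> dense tensor
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
```
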
@@ -48,7 +48,7 @@ colors = plt.rcParams['axes.prop_cycle'].by_key()['color']

Pandas is a Python library with many helpful utilities for loading and working with structured data, and can be used to download CSVs into a dataframe.

**Note:** This dataset has been collected and analysed during a research collaboration of Worldline and the [Machine Learning Group](http://mlg.ulb.ac.be) of ULB (Université Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available [here](https://www.researchgate.net/project/Fraud-detection-5) and on the page of the [DefeatFraud](https://mlg.ulb.ac.be/wordpress/portfolio_page/defeatfraud-assessment-and-validation-of-deep-feature-engineering-and-learning-solutions-for-fraud-detection/) project.

```
file = tf.keras.utils
```

@@ -119,7 +119,7 @@ test_features = np.array(test_df)

Normalize the input features using the sklearn `StandardScaler`. This will set the mean to 0 and standard deviation to 1.

**Note:** The `StandardScaler` is only fit using the `train_features`, to be sure the model is not peeking at the validation or test sets.

```
scaler = StandardScaler()
```

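A hedged completion of that snippet, assuming the `train_features`, `val_features`, and `test_features` arrays defined earlier in the tutorial:

```
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
train_features = scaler.fit_transform(train_features)   # fit on the training set only
val_features = scaler.transform(val_features)           # reuse the training statistics
test_features = scaler.transform(test_features)
```
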
@@ -151,7 +151,7 @@ Test features shape: (56962, 29)

**Caution:** If you want to deploy a model, it's critical that you preserve the preprocessing calculations. The easiest way is to implement them as layers and attach them to your model before export.

### Look at the data distribution

@@ -234,7 +234,7 @@ Notice that there are a few metrics defined above that can be computed by the mo

* **Recall** is the percentage of **actual** positives that were correctly classified: $\frac{\text{true positives}}{\text{true positives + false negatives}}$
* **AUC** refers to the Area Under the Curve of a Receiver Operating Characteristic curve (ROC-AUC). This metric is equal to the probability that a classifier will rank a random positive sample higher than a random negative sample.

**Note:** Accuracy is not a helpful metric for this task. You can get 99.8%+ accuracy on this task by predicting False all the time.

Read more:

@@ -249,7 +249,7 @@ Read more:

Now create and train your model using the function that was defined earlier. Notice that the model is fit using a larger-than-default batch size of 2048; this is important to ensure that each batch has a decent chance of containing a few positive samples. If the batch size were too small, many batches would likely have no fraudulent transactions to learn from.

**Note:** This model will not handle the class imbalance well. You will improve it later in this tutorial.

```
EPOCHS = 100
```

@@ -575,7 +575,7 @@ plot_metrics(baseline_history)

**Note:** The validation curve generally performs better than the training curve. This is mainly because the dropout layer is not active when evaluating the model.

### Evaluate metrics

@@ -698,7 +698,7 @@ Weight for class 1: 289.44

Now try re-training and evaluating the model with class weights to see how that affects the predictions.

**Note:** Using `class_weights` changes the range of the loss. This may affect the stability of the training depending on the optimizer. Optimizers whose step size is dependent on the magnitude of the gradient, like [`optimizers.SGD`](https://tensorflow.google.cn/api_docs/python/tf/keras/optimizers/SGD), may fail. The optimizer used here, [`optimizers.Adam`](https://tensorflow.google.cn/api_docs/python/tf/keras/optimizers/Adam), is unaffected by the scaling change. Also note that because of the weighting, the total losses are not comparable between the two models.

```
weighted_model = make_model()
```

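A hedged sketch of passing class weights to `Model.fit`; the weight values echo the tutorial's output above, and the data variable names are assumptions:

```
class_weight = {0: 0.5, 1: 289.44}   # weight the rare positive class up

weighted_history = weighted_model.fit(
    train_features, train_labels,
    batch_size=2048,
    epochs=EPOCHS,
    validation_data=(val_features, val_labels),
    class_weight=class_weight)
```
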
@@ -1064,7 +1064,7 @@ resampled_steps_per_epoch

Now try training the model with the resampled data set instead of using class weights to see how these methods compare.

**Note:** Because the data was balanced by replicating the positive examples, the total dataset size is larger, and each epoch runs for more training steps.

```
resampled_model = make_model()
```

@@ -1147,7 +1147,7 @@ lstm_model = tf.keras.models.Sequential([

With `return_sequences=True` the model can be trained on 24h of data at a time.

**Note:** This will give a pessimistic view of the model's performance. On the first timestep the model has no access to previous steps, and so can't do any better than the simple `linear` and `dense` models shown earlier.

```
print('Input shape:', wide_window.example[0].shape)
```

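For reference, a model in the spirit of the one being discussed; the layer sizes are illustrative:

```
import tensorflow as tf

lstm_model = tf.keras.models.Sequential([
    # return_sequences=True: one output per input timestep, so the model can be
    # trained against a target at every hour of the 24h window.
    tf.keras.layers.LSTM(32, return_sequences=True),
    tf.keras.layers.Dense(units=1),
])
```
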
@@ -1726,7 +1726,7 @@ With the `RNN`'s state, and an initial prediction you can now continue iterating

The simplest approach to collecting the output predictions is to use a Python list and [`tf.stack`](https://tensorflow.google.cn/api_docs/python/tf/stack) after the loop.

**Note:** Stacking a Python list like this only works with eager execution, using [`Model.compile(..., run_eagerly=True)`](https://tensorflow.google.cn/api_docs/python/tf/keras/Model#compile) for training, or with a fixed-length output. For a dynamic output length, you would need to use a [`tf.TensorArray`](https://tensorflow.google.cn/api_docs/python/tf/TensorArray) instead of a Python list, and [`tf.range`](https://tensorflow.google.cn/api_docs/python/tf/range) instead of the Python `range`.

```
def call(self, inputs, training=None):
```

@@ -4,11 +4,11 @@

**Note:** Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are accurate or reflect the latest [official English documentation](https://tensorflow.google.cn/?hl=en). If you have suggestions for improving this translation, please submit a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, join the [docs-zh-cn@tensorflow.org Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn).

This tutorial uses deep learning to compose one image in the style of another image (ever wish you could paint like Picasso or Van Gogh?). This is known as *neural style transfer*, and the technique is outlined in [A Neural Algorithm of Artistic Style](https://arxiv.org/abs/1508.06576) (Gatys et al.).

**Note:** This tutorial demonstrates the original style-transfer algorithm, which optimizes the image content to a particular style. Modern approaches train a model to generate the stylized image directly (similar to [cyclegan](/tutorials/generative/cyclegan)); that approach is much faster (up to 1000x). Pretrained [Arbitrary Image Stylization modules](https://colab.sandbox.google.com/github/tensorflow/hub/blob/master/examples/colab/tf2_arbitrary_image_stylization.ipynb) are available on [TensorFlow Hub](https://tensorflow.google.cn/hub) and in [TensorFlow Lite](https://tensorflow.google.cn/lite/models/style_transfer/overview).

Neural style transfer is an optimization technique used to take two images, a *content* image and a *style reference* image (such as an artwork by a famous painter), and blend them together so that the output image looks like the content image, but "painted" in the style of the style reference image.

@@ -2,7 +2,7 @@

> Source: [https://tensorflow.google.cn/tutorials/generative/dcgan](https://tensorflow.google.cn/tutorials/generative/dcgan)

**Note:** Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are accurate or reflect the latest [official English documentation](https://tensorflow.google.cn/?hl=en). If you have suggestions for improving this translation, please submit a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, join the [docs-zh-cn@tensorflow.org Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn).

This tutorial demonstrates how to generate images of handwritten digits using a [Deep Convolutional Generative Adversarial Network](https://arxiv.org/pdf/1511.06434.pdf) (DCGAN). The code is written using the [Keras Sequential API](https://tensorflow.google.cn/guide/keras) with a [`tf.GradientTape`](https://tensorflow.google.cn/api_docs/python/tf/GradientTape) training loop.

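As a hedged sketch of the `tf.GradientTape` training-loop pattern this tutorial is built on, with tiny dense stand-ins for the real convolutional generator and discriminator:

```
import tensorflow as tf

# Tiny stand-ins for the tutorial's conv models, just to make the sketch runnable.
generator = tf.keras.Sequential([tf.keras.layers.Dense(784, activation='tanh')])
discriminator = tf.keras.Sequential([tf.keras.layers.Dense(1)])
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
gen_opt = tf.keras.optimizers.Adam(1e-4)
disc_opt = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(images, noise_dim=100):
    noise = tf.random.normal([tf.shape(images)[0], noise_dim])
    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        fake = generator(noise, training=True)
        real_out = discriminator(images, training=True)
        fake_out = discriminator(fake, training=True)
        gen_loss = cross_entropy(tf.ones_like(fake_out), fake_out)
        disc_loss = (cross_entropy(tf.ones_like(real_out), real_out) +
                     cross_entropy(tf.zeros_like(fake_out), fake_out))
    gen_opt.apply_gradients(zip(gen_tape.gradient(gen_loss, generator.trainable_variables),
                                generator.trainable_variables))
    disc_opt.apply_gradients(zip(disc_tape.gradient(disc_loss, discriminator.trainable_variables),
                                 discriminator.trainable_variables))

train_step(tf.random.normal([8, 784]))   # one step on stand-in "images"
```
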
@@ -482,7 +482,7 @@ Write a function to plot some images during training.

* The generator will then translate the input image into the output.
* The last step is to plot the predictions and **voila!**

**Note:** The `training=True` here is intentional, since we want the batch statistics while running the model on the test dataset. If we used `training=False`, we would get the accumulated statistics learned from the training dataset (which we don't want).

```
def generate_images(model, test_input, tar):
```

@@ -622,14 +622,14 @@ Time taken for epoch 150 is 16.14578342437744 sec

If you want to share the TensorBoard results *publicly* you can upload the logs to [TensorBoard.dev](https://tensorboard.dev/) by copying the following into a code-cell.

**Note:** This requires a Google account.

```
tensorboard dev upload --logdir {log_dir}
```

**Caution:** This command does not terminate. It's designed to continuously upload the results of long-running experiments. Once your data is uploaded you need to stop it using the "interrupt execution" option in your notebook tool.

You can view the [results of a previous run](https://tensorboard.dev/experiment/lZ0C6FONROaUMfjYkVyJqw) of this notebook on [TensorBoard.dev](https://tensorboard.dev/).

@@ -4,7 +4,7 @@

**Note:** Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are accurate or reflect the latest [official English documentation](https://tensorflow.google.cn/?hl=en). If you have suggestions for improving this translation, please submit a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, join the [docs-zh-cn@tensorflow.org Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn).

This notebook demonstrates unpaired image-to-image translation using conditional GANs, as described in [Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks](https://arxiv.org/abs/1703.10593), also known as CycleGAN. The paper proposes a method that can capture the characteristics of one image domain and figure out how these characteristics could be translated into another image domain, all without any paired training examples.

@@ -353,7 +353,7 @@ plt.show()

In this example, you will train an autoencoder to detect anomalies on the [ECG5000 dataset](http://www.timeseriesclassification.com/description.php?Dataset=ECG5000). This dataset contains 5,000 [Electrocardiograms](https://en.wikipedia.org/wiki/Electrocardiography), each with 140 data points. You will use a simplified version of the dataset, where each example has been labeled either `0` (corresponding to an abnormal rhythm) or `1` (corresponding to a normal rhythm). You are interested in identifying the abnormal rhythms.

**Note:** This is a labeled dataset, so you could phrase this as a supervised learning problem. The goal of this example is to illustrate anomaly detection concepts you can apply to larger datasets where you do not have labels available (for example, if you had many thousands of normal rhythms and only a small number of abnormal rhythms).

How will you detect anomalies using an autoencoder? Recall that an autoencoder is trained to minimize reconstruction error. You will train an autoencoder on the normal rhythms only, then use it to reconstruct all the data. The hypothesis is that the abnormal rhythms will have higher reconstruction error. You will then classify a rhythm as an anomaly if the reconstruction error surpasses a fixed threshold.

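A hedged sketch of that threshold rule, assuming a trained `autoencoder` and the normal-only training array from the tutorial; the mean-plus-one-standard-deviation cutoff is one common choice:

```
import numpy as np
import tensorflow as tf

reconstructions = autoencoder.predict(normal_train_data)
train_loss = tf.keras.losses.mae(reconstructions, normal_train_data)

threshold = np.mean(train_loss) + np.std(train_loss)   # fixed threshold

def predict_anomaly(model, data, threshold):
    loss = tf.keras.losses.mae(model.predict(data), data)
    return loss > threshold   # True means flagged as anomalous
```
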
@@ -585,7 +585,7 @@ Threshold: 0.033377893

**Note:** There are other strategies you could use to select a threshold value above which test examples should be classified as anomalous; the correct approach will depend on your dataset. You can learn more with the links at the end of this tutorial.

If you examine the reconstruction error for the anomalous examples in the test set, you'll notice most have greater reconstruction error than the threshold. By varying the threshold, you can adjust the [precision](https://developers.google.cn/machine-learning/glossary#precision) and [recall](https://developers.google.cn/machine-learning/glossary#recall) of your classifier.

@@ -4,7 +4,7 @@

**Note:** Our TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are accurate or reflect the latest [official English documentation](https://tensorflow.google.cn/?hl=en). If you have suggestions for improving this translation, please submit a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, join the [docs-zh-cn@tensorflow.org Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn).
