Update references (#335)

This commit is contained in:
Cheng Lai
2022-05-09 16:56:21 +08:00
committed by GitHub
parent 4414831c72
commit 0e75ed2506
26 changed files with 856 additions and 39 deletions


@@ -66,3 +66,7 @@ The K-means clustering algorithm is an algorithm for solving clustering problems; its procedure is as follows
Closing remarks for this chapter:
From a systems perspective, no matter which machine learning algorithm is used, tasks involving high-dimensional data are ultimately implemented as matrix operations.
## References
:bibliography:`../references/appendix.bib`


@@ -11,6 +11,11 @@
## Further Reading
- CUDA Programming Guide [CUDA](https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html)
- Ascend Community [Ascend](https://gitee.com/ascend)
- MLIR Application Progress [MLIR](https://mlir.llvm.org/talks)
## References
:bibliography:`../references/accelerator.bib`


@@ -153,3 +153,7 @@ Multiplayer, MOBA.](../img/ch11/xai_kg_recommendataion.png)
In addition, the deployment of XAI systems badly needs a more standard and unified evaluation framework. To build such a framework, we may need to combine different metrics that complement one another; different metrics may suit different tasks and users, so a unified evaluation framework should retain the corresponding flexibility.
Finally, we believe that interdisciplinary collaboration will be beneficial. The development of XAI requires not only computer scientists to develop advanced algorithms, but also physicists, biologists, and cognitive scientists to unravel the mysteries of human cognition, as well as domain experts to contribute their domain knowledge.
## References
:bibliography:`../references/explainable.bib`


@@ -1,10 +0,0 @@
```eval_rst
.. only:: html
References
==========
```
:bibliography:`../mlsys.bib`


@@ -1,3 +1,7 @@
## Summary
In this chapter, we briefly introduced the basic concepts of reinforcement learning, including single-agent and multi-agent reinforcement learning algorithms as well as single-node and distributed reinforcement learning systems, to give readers a basic understanding of reinforcement learning problems. Reinforcement learning is currently a fast-growing branch of deep learning, and further progress in its algorithms may solve many practical problems. On the other hand, the particular setting of reinforcement learning problems, such as the need to collect samples by interacting with an environment, places higher demands on computing systems: how to better balance sample collection against policy training, how to balance the capabilities of different hardware such as CPUs and GPUs, and how to effectively deploy reinforcement learning agents on large-scale distributed systems all call for a better understanding of the design and use of computer systems.
## References
:bibliography:`../references/reinforcement.bib`


@@ -1,3 +1,7 @@
## Summary
In this chapter, we briefly introduced the basic concepts of robot learning systems, including general-purpose robot operating systems, perception systems, planning systems, and control systems, to give readers a basic understanding of robot learning problems. Robot learning is currently a fast-growing branch of artificial intelligence, and further progress in its algorithms may solve many practical problems. On the other hand, the particular setting of robot learning problems couples the corresponding systems more tightly, and in more complex ways, with the underlying hardware: how to better balance the load of the various sensors, and how to maximize computational efficiency (real-time performance) under limited computing resources, all call for a better understanding of the design and use of computer systems.
## References
:bibliography:`../references/rlsys.bib`


@@ -19,7 +19,7 @@ lang = zh
notebooks = *.md */*.md
# A list of files that will be copied to the build folder.
resources = img/ mlsyszh/ mlsys.bib
resources = img/ references/
# Files that will be skipped.
exclusions = */*_origin.md README.md info/* contrib/*md
@@ -86,24 +86,3 @@ html_logo = static/logo-with-text.png
# post_latex = ./static/post_latex/main.py
latex_logo = static/logo.png
#[deploy]
#other_file_s3urls = s3://d2l-webdata/releases/d2l-zh/d2l-zh-1.0.zip
# s3://d2l-webdata/releases/d2l-zh/d2l-zh-1.1.zip
# s3://d2l-webdata/releases/d2l-zh/d2l-zh-2.0.0.zip
#google_analytics_tracking_id = UA-96378503-2
#[colab]
#github_repo = mxnet, d2l-ai/d2l-zh-colab
# pytorch, d2l-ai/d2l-zh-pytorch-colab
# tensorflow, d2l-ai/d2l-zh-tensorflow-colab
#replace_svg_url = img, http://d2l.ai/_images
#libs = mxnet, mxnet, -U mxnet-cu101==1.7.0
# mxnet, d2l, git+https://github.com/d2l-ai/d2l-zh@release # installing d2l
# pytorch, d2l, git+https://github.com/d2l-ai/d2l-zh@release # installing d2l
# tensorflow, d2l, git+https://github.com/d2l-ai/d2l-zh@release # installing d2l


@@ -33,5 +33,4 @@ chapter_rl_sys/index
:maxdepth: 1
appendix_machine_learning_introduction/index
chapter_references/index
```

info/refenence_guide.md Normal file

@@ -0,0 +1,35 @@
# How to cite references
The examples below show how to cite references; note that a citation must be preceded by a space:
1. A single reference
This article cites the paper :cite:`cnn2015`
2. Multiple references can be separated by commas
This article cites the papers :cite:`cnn2015,rnn2015`
3. The corresponding bib file should then contain the following entries
@inproceedings{cnn2015,
title = {CNN},
author = {xxx},
year = {2015},
keywords = {xxx}
}
@inproceedings{rnn2015,
title = {RNN},
author = {xxx},
year = {2015},
keywords = {xxx}
}
# Placing references at the end of a chapter
1. Collect all references cited by a chapter into a chapter.bib file and place it under the references folder.
For example, the robotics chapter puts all of its references in rlsys.bib and places it under the references folder.
```
References directory:
/references/rlsys.bib
```
2. Add the reference section at the end of the chapter; for example, the robotics chapter appends the following at the end of its summary:
```
## References
:bibliography:`../references/rlsys.bib`
```
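The guide above also requires that citation keys in the bib files never repeat. This can be checked mechanically across the per-chapter bib files; the sketch below is an illustration only (the helper names and the key-extraction regex are assumptions, not part of the book's tooling):

```python
import re

def bib_keys(text):
    # Extract citation keys such as 'cnn2015' from '@inproceedings{cnn2015,' headers.
    return re.findall(r'@\w+\s*\{\s*([^,\s]+)\s*,', text)

def duplicate_keys(bib_texts):
    # Report keys that appear more than once across the given bib file contents.
    seen, dups = set(), set()
    for text in bib_texts:
        for key in bib_keys(text):
            if key in seen:
                dups.add(key)
            seen.add(key)
    return sorted(dups)

sample = """
@inproceedings{cnn2015, title = {CNN}, year = {2015}}
@inproceedings{rnn2015, title = {RNN}, year = {2015}}
"""

print(duplicate_keys([sample]))          # no duplicates within one file
print(duplicate_keys([sample, sample]))  # every key repeats across the two files
```

In a repository checkout, one might feed it `[p.read_text() for p in pathlib.Path('references').glob('*.bib')]` to vet all chapter bib files at once.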


@@ -92,7 +92,7 @@
:eqlabel:`linear`
Use :eqref:`linear` to reference the equation
```
* Citing references: references live in mlsys.bib; to add one, just append it to that file. Cite with :cite:`key`
* Citing references: references live in references/xxx.bib; to add one, just append it to the appropriate file. Cite with :cite:`key`
Note that a bib file must not contain duplicate entries.
```python
Citation examples:
@@ -101,7 +101,7 @@
2. Multiple references can be separated by commas
This article cites the papers :cite:`cnn2015,rnn2015`
The mlsys.bib file should then contain the following entries
The corresponding bib file should then contain the following entries
@inproceedings{cnn2015,
title = {CNN},
author = {xxx},


@@ -0,0 +1,92 @@
@misc{2017NVIDIA,
author={NVIDIA},
title={NVIDIA Tesla V100 GPU Architecture: The World's Most Advanced Datacenter GPU},
year={2017},
howpublished = "Website",
note = {\url{http://www.nvidia.com/object/volta-architecture-whitepaper.html}}
}
@inproceedings{2021Ascend,
title={Ascend: a Scalable and Unified Architecture for Ubiquitous Deep Neural Network Computing : Industry Track Paper},
author={Liao, Heng and Tu, Jiajin and Xia, Jing and Liu, Hu and Zhou, Xiping and Yuan, Honghui and Hu, Yuxing},
booktitle={2021 IEEE International Symposium on High-Performance Computer Architecture (HPCA)},
year={2021},
pages = {789--801},
doi = {10.1109/HPCA51647.2021.00071},
}
@article{ragan2013halide,
title={Halide: a language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines},
author={Ragan-Kelley, Jonathan and Barnes, Connelly and Adams, Andrew and Paris, Sylvain and Durand, Fr{\'e}do and Amarasinghe, Saman},
journal={Acm Sigplan Notices},
volume={48},
number={6},
pages={519--530},
year={2013},
publisher={ACM New York, NY, USA}
}
@article{chen2018tvm,
title={TVM: end-to-end optimization stack for deep learning},
author={Chen, Tianqi and Moreau, Thierry and Jiang, Ziheng and Shen, Haichen and Yan, Eddie Q and Wang, Leyuan and Hu, Yuwei and Ceze, Luis and Guestrin, Carlos and Krishnamurthy, Arvind},
journal={arXiv preprint arXiv:1802.04799},
volume={11},
pages={20},
year={2018},
publisher={CoRR}
}
@inproceedings{verdoolaege2010isl,
title={isl: An integer set library for the polyhedral model},
author={Verdoolaege, Sven},
booktitle={International Congress on Mathematical Software},
pages={299--302},
year={2010},
organization={Springer}
}
@inproceedings{zheng2020ansor,
title={Ansor: Generating {High-Performance} Tensor Programs for Deep Learning},
author={Zheng, Lianmin and Jia, Chengfan and Sun, Minmin and Wu, Zhao and Yu, Cody Hao and Haj-Ali, Ameer and Wang, Yida and Yang, Jun and Zhuo, Danyang and Sen, Koushik and others},
booktitle={14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20)},
pages={863--879},
year={2020}
}
@article{lattner2020mlir,
title={MLIR: A compiler infrastructure for the end of Moore's law},
author={Lattner, Chris and Amini, Mehdi and Bondhugula, Uday and Cohen, Albert and Davis, Andy and Pienaar, Jacques and Riddle, River and Shpeisman, Tatiana and Vasilache, Nicolas and Zinenko, Oleksandr},
journal={arXiv preprint arXiv:2002.11054},
year={2020}
}
@inproceedings{zhao2021akg,
title={AKG: automatic kernel generation for neural processing units using polyhedral transformations},
author={Zhao, Jie and Li, Bojie and Nie, Wang and Geng, Zhen and Zhang, Renwei and Gao, Xiong and Cheng, Bin and Wu, Chen and Cheng, Yun and Li, Zheng and others},
booktitle={Proceedings of the 42nd ACM SIGPLAN International Conference on Programming Language Design and Implementation},
pages={1233--1248},
year={2021}
}
@article{vasilache2022composable,
title={Composable and Modular Code Generation in MLIR: A Structured and Retargetable Approach to Tensor Compiler Construction},
author={Vasilache, Nicolas and Zinenko, Oleksandr and Bik, Aart JC and Ravishankar, Mahesh and Raoux, Thomas and Belyaev, Alexander and Springer, Matthias and Gysi, Tobias and Caballero, Diego and Herhut, Stephan and others},
journal={arXiv preprint arXiv:2202.03293},
year={2022}
}
@inproceedings{bastoul2004code,
title={Code generation in the polyhedral model is easier than you think},
author={Bastoul, C{\'e}dric},
booktitle={Proceedings. 13th International Conference on Parallel Architecture and Compilation Techniques, 2004. PACT 2004.},
pages={7--16},
year={2004},
organization={IEEE}
}
@article{2018Modeling,
title={Modeling Deep Learning Accelerator Enabled GPUs},
author={Raihan, M. A. and Goli, N. and Aamodt, T.},
journal={arXiv preprint arXiv:1811.08309},
year={2018}
}

references/appendix.bib Normal file

@@ -0,0 +1,103 @@
@article{rosenblatt1958perceptron,
title={The perceptron: a probabilistic model for information storage and organization in the brain.},
author={Rosenblatt, Frank},
journal={Psychological Review},
volume={65},
number={6},
pages={386},
year={1958},
publisher={American Psychological Association}
}
@article{lecun1989backpropagation,
title={Backpropagation applied to handwritten zip code recognition},
author={LeCun, Yann and Boser, Bernhard and Denker, John S and Henderson, Donnie and Howard, Richard E and Hubbard, Wayne and Jackel, Lawrence D},
journal={Neural computation},
volume={1},
number={4},
pages={541--551},
year={1989},
publisher={MIT Press}
}
@inproceedings{krizhevsky2012imagenet,
title={Imagenet classification with deep convolutional neural networks},
author={Krizhevsky, Alex and Sutskever, Ilya and Hinton, Geoffrey E},
booktitle={Advances in Neural Information Processing Systems},
pages={1097--1105},
year={2012}
}
@inproceedings{he2016deep,
title={{Deep Residual Learning for Image Recognition}},
author={He, Kaiming and Zhang, Xiangyu and Ren, Shaoqing and Sun, Jian},
booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2016}
}
@article{rumelhart1986learning,
title={Learning representations by back-propagating errors},
author={Rumelhart, David E and Hinton, Geoffrey E and Williams, Ronald J},
journal={Nature},
volume={323},
number={6088},
pages={533},
year={1986},
publisher={Nature Publishing Group}
}
@article{Hochreiter1997lstm,
author = {Hochreiter, Sepp and Hochreiter, S and Schmidhuber, J{\"{u}}rgen and Schmidhuber, J},
isbn = {08997667 (ISSN)},
issn = {0899-7667},
journal = {Neural Computation},
number = {8},
pages = {1735--80},
pmid = {9377276},
title = {{Long Short-Term Memory.}},
volume = {9},
year = {1997}
}
@inproceedings{vaswani2017attention,
title={Attention is all you need},
author={Vaswani, Ashish and Shazeer, Noam and Parmar, Niki and Uszkoreit, Jakob and Jones, Llion and Gomez, Aidan N and Kaiser, {\L}ukasz and Polosukhin, Illia},
booktitle={Advances in Neural Information Processing Systems},
pages={5998--6008},
year={2017}
}
@article{lecun2015deep,
title={Deep learning},
author={LeCun, Yann and Bengio, Yoshua and Hinton, Geoffrey},
journal={Nature},
volume={521},
number={7553},
pages={436},
year={2015},
publisher={Nature Publishing Group}
}
@inproceedings{KingmaAdam2014,
title = {{Adam}: A Method for Stochastic Optimization},
author = {Kingma, Diederik and Ba, Jimmy},
booktitle = {Proceedings of the International Conference on Learning Representations (ICLR)},
year = {2014}
}
@techreport{tieleman2012rmsprop,
title={Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning},
author={Tieleman, T and Hinton, G},
year={2012},
institution={Technical Report}
}
@article{duchi2011adagrad,
title={Adaptive subgradient methods for online learning and stochastic optimization},
author={Duchi, John and Hazan, Elad and Singer, Yoram},
journal={Journal of Machine Learning Research (JMLR)},
volume={12},
number={Jul},
pages={2121--2159},
year={2011}
}

references/backend.bib Normal file

references/data.bib Normal file


@@ -0,0 +1,59 @@
@ARTICLE{2020tkde_li,
author={Li, Xiao-Hui and Cao, Caleb Chen and Shi, Yuhan and Bai, Wei and Gao, Han and Qiu, Luyu and Wang, Cong and Gao, Yuanyuan and Zhang, Shenjia and Xue, Xun and Chen, Lei},
journal={IEEE Transactions on Knowledge and Data Engineering},
title={A Survey of Data-driven and Knowledge-aware eXplainable AI},
year={2020},
volume={},
number={},
pages={1-1},
doi={10.1109/TKDE.2020.2983930}
}
@article{erhan2009visualizing,
title={Visualizing higher-layer features of a deep network},
author={Erhan, Dumitru and Bengio, Yoshua and Courville, Aaron and Vincent, Pascal},
journal={University of Montreal},
volume={1341},
number={3},
pages={1},
year={2009}
}
@misc{kim2018interpretability,
title={Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)},
author={Been Kim and Martin Wattenberg and Justin Gilmer and Carrie Cai and James Wexler and Fernanda Viegas and Rory Sayres},
year={2018},
eprint={1711.11279},
archivePrefix={arXiv},
primaryClass={stat.ML}
}
@article{riedl2019human,
title={Human-centered artificial intelligence and machine learning},
author={Riedl, Mark O.},
journal={Human Behavior and Emerging Technologies},
volume={1},
number={1},
pages={33--36},
year={2019},
publisher={Wiley Online Library}
}
@inproceedings{10.1145/2988450.2988454,
author = {Cheng, Heng-Tze and Koc, Levent and Harmsen, Jeremiah and Shaked, Tal and Chandra, Tushar and Aradhye, Hrishi and Anderson, Glen and Corrado, Greg and Chai, Wei and Ispir, Mustafa and Anil, Rohan and Haque, Zakaria and Hong, Lichan and Jain, Vihan and Liu, Xiaobing and Shah, Hemal},
title = {Wide & Deep Learning for Recommender Systems},
year = {2016},
isbn = {9781450347952},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/2988450.2988454},
doi = {10.1145/2988450.2988454},
abstract = {Generalized linear models with nonlinear feature transformations are widely used for large-scale regression and classification problems with sparse inputs. Memorization of feature interactions through a wide set of cross-product feature transformations are effective and interpretable, while generalization requires more feature engineering effort. With less feature engineering, deep neural networks can generalize better to unseen feature combinations through low-dimensional dense embeddings learned for the sparse features. However, deep neural networks with embeddings can over-generalize and recommend less relevant items when the user-item interactions are sparse and high-rank. In this paper, we present Wide & Deep learning---jointly trained wide linear models and deep neural networks---to combine the benefits of memorization and generalization for recommender systems. We productionized and evaluated the system on Google Play, a commercial mobile app store with over one billion active users and over one million apps. Online experiment results show that Wide & Deep significantly increased app acquisitions compared with wide-only and deep-only models. We have also open-sourced our implementation in TensorFlow.},
booktitle = {Proceedings of the 1st Workshop on Deep Learning for Recommender Systems},
pages = {7-10},
numpages = {4},
keywords = {Recommender Systems, Wide & Deep Learning},
location = {Boston, MA, USA},
series = {DLRS 2016}
}

references/extension.bib Normal file

references/federated.bib Normal file

references/frontend.bib Normal file

references/graph.bib Normal file

references/interface.bib Normal file


references/model.bib Normal file



@@ -0,0 +1,174 @@
@article{han2020tstarbot,
title={Tstarbot-x: An open-sourced and comprehensive study for efficient league training in starcraft ii full game},
author={Han, Lei and Xiong, Jiechao and Sun, Peng and Sun, Xinghai and Fang, Meng and Guo, Qingwei and Chen, Qiaobo and Shi, Tengfei and Yu, Hongsheng and Wu, Xipeng and others},
journal={arXiv preprint arXiv:2011.13729},
year={2020}
}
@inproceedings{wang2021scc,
title={SCC: an efficient deep reinforcement learning agent mastering the game of StarCraft II},
author={Wang, Xiangjun and Song, Junxiao and Qi, Penghui and Peng, Peng and Tang, Zhenkun and Zhang, Wei and Li, Weimin and Pi, Xiongjun and He, Jujie and Gao, Chao and others},
booktitle={International Conference on Machine Learning},
pages={10905--10915},
year={2021},
organization={PMLR}
}
@inproceedings{MLSYS2021_979d472a,
author = {Yin, Chunxing and Acun, Bilge and Wu, Carole-Jean and Liu, Xing},
booktitle = {Proceedings of Machine Learning and Systems},
editor = {A. Smola and A. Dimakis and I. Stoica},
pages = {448--462},
title = {TT-Rec: Tensor Train Compression for Deep Learning Recommendation Models},
url = {https://proceedings.mlsys.org/paper/2021/file/979d472a84804b9f647bc185a877a8b5-Paper.pdf},
volume = {3},
year = {2021}
}
@inproceedings{MLSYS2020_f7e6c855,
author = {Zhao, Weijie and Xie, Deping and Jia, Ronglai and Qian, Yulei and Ding, Ruiquan and Sun, Mingming and Li, Ping},
booktitle = {Proceedings of Machine Learning and Systems},
editor = {I. Dhillon and D. Papailiopoulos and V. Sze},
pages = {412--428},
title = {Distributed Hierarchical GPU Parameter Server for Massive Scale Deep Learning Ads Systems},
url = {https://proceedings.mlsys.org/paper/2020/file/f7e6c85504ce6e82442c770f7c8606f0-Paper.pdf},
volume = {2},
year = {2020}
}
@article{zionex,
title={Software-Hardware Co-design for Fast and Scalable Training of Deep Learning Recommendation Models},
author={Mudigere, Dheevatsa and Hao, Yuchen and Huang, Jianyu and Jia, Zhihao and Tulloch, Andrew and Sridharan, Srinivas and Liu, Xing and Ozdal, Mustafa and Nie, Jade and Park, Jongsoo and others},
journal={arXiv preprint arXiv:2104.05158},
year={2021}
}
@inproceedings{gong2020edgerec,
title={EdgeRec: Recommender System on Edge in Mobile Taobao},
author={Gong, Yu and Jiang, Ziwen and Feng, Yufei and Hu, Binbin and Zhao, Kaiqi and Liu, Qingwen and Ou, Wenwu},
booktitle={Proceedings of the 29th ACM International Conference on Information \& Knowledge Management},
pages={2477--2484},
year={2020}
}
@inproceedings{NEURIPS2020_a1d4c20b,
author = {He, Chaoyang and Annavaram, Murali and Avestimehr, Salman},
booktitle = {Advances in Neural Information Processing Systems},
editor = {H. Larochelle and M. Ranzato and R. Hadsell and M. F. Balcan and H. Lin},
pages = {14068--14080},
publisher = {Curran Associates, Inc.},
title = {Group Knowledge Transfer: Federated Learning of Large CNNs at the Edge},
url = {https://proceedings.neurips.cc/paper/2020/file/a1d4c20b182ad7137ab3606f0e3fc8a4-Paper.pdf},
volume = {33},
year = {2020}
}
@INPROCEEDINGS{9355295,
author={Xie, Minhui and Ren, Kai and Lu, Youyou and Yang, Guangxu and Xu, Qingxing and Wu, Bihai and Lin, Jiazhen and Ao, Hongbo and Xu, Wanhong and Shu, Jiwu},
booktitle={SC20: International Conference for High Performance Computing, Networking, Storage and Analysis},
title={Kraken: Memory-Efficient Continual Learning for Large-Scale Real-Time Recommendations},
year={2020},
volume={},
number={},
pages={1-17},
doi={10.1109/SC41405.2020.00025}
}
@inproceedings{MLSYS2021_ec895663,
author = {Jiang, Wenqi and He, Zhenhao and Zhang, Shuai and Preu\ss er, Thomas B. and Zeng, Kai and Feng, Liang and Zhang, Jiansong and Liu, Tongxuan and Li, Yong and Zhou, Jingren and Zhang, Ce and Alonso, Gustavo},
booktitle = {Proceedings of Machine Learning and Systems},
editor = {A. Smola and A. Dimakis and I. Stoica},
pages = {845--859},
title = {MicroRec: Efficient Recommendation Inference by Hardware and Data Structure Solutions},
url = {https://proceedings.mlsys.org/paper/2021/file/ec8956637a99787bd197eacd77acce5e-Paper.pdf},
volume = {3},
year = {2021}
}
@inproceedings{10.1145/3394486.3403059,
author = {Shi, Hao-Jun Michael and Mudigere, Dheevatsa and Naumov, Maxim and Yang, Jiyan},
title = {Compositional Embeddings Using Complementary Partitions for Memory-Efficient Recommendation Systems},
year = {2020},
isbn = {9781450379984},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3394486.3403059},
doi = {10.1145/3394486.3403059},
abstract = {},
booktitle = {Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining},
pages = {165--175},
numpages = {11},
keywords = {model compression, recommendation systems, embeddings},
location = {Virtual Event, CA, USA},
series = {KDD '20}
}
@misc{ginart2021mixed,
title={Mixed Dimension Embeddings with Application to Memory-Efficient Recommendation Systems},
author={Antonio Ginart and Maxim Naumov and Dheevatsa Mudigere and Jiyan Yang and James Zou},
year={2021},
eprint={1909.11810},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
@inproceedings{10.1145/2020408.2020444,
author = {Chu, Wei and Zinkevich, Martin and Li, Lihong and Thomas, Achint and Tseng, Belle},
title = {Unbiased Online Active Learning in Data Streams},
year = {2011},
isbn = {9781450308137},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/2020408.2020444},
doi = {10.1145/2020408.2020444},
abstract = {Unlabeled samples can be intelligently selected for labeling to minimize classification error. In many real-world applications, a large number of unlabeled samples arrive in a streaming manner, making it impossible to maintain all the data in a candidate pool. In this work, we focus on binary classification problems and study selective labeling in data streams where a decision is required on each sample sequentially. We consider the unbiasedness property in the sampling process, and design optimal instrumental distributions to minimize the variance in the stochastic process. Meanwhile, Bayesian linear classifiers with weighted maximum likelihood are optimized online to estimate parameters. In empirical evaluation, we collect a data stream of user-generated comments on a commercial news portal in 30 consecutive days, and carry out offline evaluation to compare various sampling strategies, including unbiased active learning, biased variants, and random sampling. Experimental results verify the usefulness of online active learning, especially in the non-stationary situation with concept drift.},
booktitle = {Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining},
pages = {195--203},
numpages = {9},
keywords = {unbiasedness, bayesian online learning, active learning, data streaming, adaptive importance sampling},
location = {San Diego, California, USA},
series = {KDD '11}
}
@inproceedings{10.1145/3267809.3267817,
author = {Tian, Huangshi and Yu, Minchen and Wang, Wei},
title = {Continuum: A Platform for Cost-Aware, Low-Latency Continual Learning},
year = {2018},
isbn = {9781450360111},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3267809.3267817},
doi = {10.1145/3267809.3267817},
abstract = {Many machine learning applications operate in dynamic environments that change over time, in which models must be continually updated to capture the recent trend in data. However, most of today's learning frameworks perform training offline, without a system support for continual model updating.In this paper, we design and implement Continuum, a general-purpose platform that streamlines the implementation and deployment of continual model updating across existing learning frameworks. In pursuit of fast data incorporation, we further propose two update policies, cost-aware and best-effort, that judiciously determine when to perform model updating, with and without accounting for the training cost (machine-time), respectively. Theoretical analysis shows that cost-aware policy is 2-competitive. We implement both polices in Continuum, and evaluate their performance through EC2 deployment and trace-driven simulations. The evaluation shows that Continuum results in reduced data incorporation latency, lower training cost, and improved model quality in a number of popular online learning applications that span multiple application domains, programming languages, and frameworks.},
booktitle = {Proceedings of the ACM Symposium on Cloud Computing},
pages = {26--40},
numpages = {15},
keywords = {Competitive Analysis, Continual Learning System, Online Algorithm},
location = {Carlsbad, CA, USA},
series = {SoCC '18}
}
@inproceedings{10.1145/2648584.2648589,
author = {He, Xinran and Pan, Junfeng and Jin, Ou and Xu, Tianbing and Liu, Bo and Xu, Tao and Shi, Yanxin and Atallah, Antoine and Herbrich, Ralf and Bowers, Stuart and Candela, Joaquin Qui\~{n}onero},
title = {Practical Lessons from Predicting Clicks on Ads at Facebook},
year = {2014},
isbn = {9781450329996},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/2648584.2648589},
doi = {10.1145/2648584.2648589},
abstract = {Online advertising allows advertisers to only bid and pay for measurable user responses, such as clicks on ads. As a consequence, click prediction systems are central to most online advertising systems. With over 750 million daily active users and over 1 million active advertisers, predicting clicks on Facebook ads is a challenging machine learning task. In this paper we introduce a model which combines decision trees with logistic regression, outperforming either of these methods on its own by over 3%, an improvement with significant impact to the overall system performance. We then explore how a number of fundamental parameters impact the final prediction performance of our system. Not surprisingly, the most important thing is to have the right features: those capturing historical information about the user or ad dominate other types of features. Once we have the right features and the right model (decisions trees plus logistic regression), other factors play small roles (though even small improvements are important at scale). Picking the optimal handling for data freshness, learning rate schema and data sampling improve the model slightly, though much less than adding a high-value feature, or picking the right model to begin with.},
booktitle = {Proceedings of the Eighth International Workshop on Data Mining for Online Advertising},
pages = {1--9},
numpages = {9},
location = {New York, NY, USA},
series = {ADKDD'14}
}
@misc{2017NVIDIA,
author={NVIDIA},
title={NVIDIA Tesla V100 GPU Architecture: The World's Most Advanced Datacenter GPU},
year={2017},
howpublished = "Website",
note = {\url{http://www.nvidia.com/object/volta-architecture-whitepaper.html}}
}

references/rlsys.bib Normal file

@@ -0,0 +1,365 @@
@incollection{peters2016robot,
title={Robot learning},
author={Peters, Jan and Lee, Daniel D and Kober, Jens and Nguyen-Tuong, Duy and Bagnell, J Andrew and Schaal, Stefan},
booktitle={Springer Handbook of Robotics},
pages={357--398},
year={2016},
publisher={Springer}
}
@article{saxena2014robobrain,
title={Robobrain: Large-scale knowledge engine for robots},
author={Saxena, Ashutosh and Jain, Ashesh and Sener, Ozan and Jami, Aditya and Misra, Dipendra K and Koppula, Hema S},
journal={arXiv preprint arXiv:1412.0691},
year={2014}
}
@inproceedings{zhu2017target,
title={Target-driven visual navigation in indoor scenes using deep reinforcement learning},
author={Zhu, Yuke and Mottaghi, Roozbeh and Kolve, Eric and Lim, Joseph J and Gupta, Abhinav and Fei-Fei, Li and Farhadi, Ali},
booktitle={2017 IEEE international conference on robotics and automation (ICRA)},
pages={3357--3364},
year={2017},
organization={IEEE}
}
@ARTICLE{9123682,
author={Pan, Bowen and Sun, Jiankai and Leung, Ho Yin Tiga and Andonian, Alex and Zhou, Bolei},
journal={IEEE Robotics and Automation Letters},
title={Cross-View Semantic Segmentation for Sensing Surroundings},
year={2020},
volume={5},
number={3},
pages={4867--4873},
doi={10.1109/LRA.2020.3004325}
}
@article{tang2018ba,
title={Ba-net: Dense bundle adjustment network},
author={Tang, Chengzhou and Tan, Ping},
journal={arXiv preprint arXiv:1806.04807},
year={2018}
}
@inproceedings{tanaka2021learning,
title={Learning To Bundle-Adjust: A Graph Network Approach to Faster Optimization of Bundle Adjustment for Vehicular SLAM},
author={Tanaka, Tetsuya and Sasagawa, Yukihiro and Okatani, Takayuki},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={6250--6259},
year={2021}
}
@inproceedings{tobin2017domain,
title={Domain randomization for transferring deep neural networks from simulation to the real world},
author={Tobin, Josh and Fong, Rachel and Ray, Alex and Schneider, Jonas and Zaremba, Wojciech and Abbeel, Pieter},
booktitle={2017 IEEE/RSJ international conference on intelligent robots and systems (IROS)},
pages={23--30},
year={2017},
organization={IEEE}
}
@inproceedings{finn2017deep,
title={Deep visual foresight for planning robot motion},
author={Finn, Chelsea and Levine, Sergey},
booktitle={2017 IEEE International Conference on Robotics and Automation (ICRA)},
pages={2786--2793},
year={2017},
organization={IEEE}
}
@article{duan2017one,
title={One-shot imitation learning},
author={Duan, Yan and Andrychowicz, Marcin and Stadie, Bradly and Jonathan Ho, OpenAI and Schneider, Jonas and Sutskever, Ilya and Abbeel, Pieter and Zaremba, Wojciech},
journal={Advances in neural information processing systems},
volume={30},
year={2017}
}
@book{koubaa2017robot,
title={Robot Operating System (ROS).},
author={Koub{\^a}a, Anis and others},
volume={1},
year={2017},
publisher={Springer}
}
@article{coleman2014reducing,
title={Reducing the barrier to entry of complex robotic software: a moveit! case study},
author={Coleman, David and Sucan, Ioan and Chitta, Sachin and Correll, Nikolaus},
journal={arXiv preprint arXiv:1404.3785},
year={2014}
}
@inproceedings{salzmann2020trajectron++,
title={Trajectron++: Dynamically-feasible trajectory forecasting with heterogeneous data},
author={Salzmann, Tim and Ivanovic, Boris and Chakravarty, Punarjay and Pavone, Marco},
booktitle={European Conference on Computer Vision},
pages={683--700},
year={2020},
organization={Springer}
}
@inproceedings{gog2021pylot,
title={Pylot: A modular platform for exploring latency-accuracy tradeoffs in autonomous vehicles},
author={Gog, Ionel and Kalra, Sukrit and Schafhalter, Peter and Wright, Matthew A and Gonzalez, Joseph E and Stoica, Ion},
booktitle={2021 IEEE International Conference on Robotics and Automation (ICRA)},
pages={8806--8813},
year={2021},
organization={IEEE}
}
@inproceedings{Dosovitskiy17,
title = { {CARLA}: {An} Open Urban Driving Simulator},
author = {Alexey Dosovitskiy and German Ros and Felipe Codevilla and Antonio Lopez and Vladlen Koltun},
booktitle = {Proceedings of the 1st Annual Conference on Robot Learning},
pages = {1--16},
year = {2017}
}
@inproceedings{10.1145/3492321.3519576,
author = {Gog, Ionel and Kalra, Sukrit and Schafhalter, Peter and Gonzalez, Joseph E. and Stoica, Ion},
title = {D3: A Dynamic Deadline-Driven Approach for Building Autonomous Vehicles},
year = {2022},
isbn = {9781450391627},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3492321.3519576},
doi = {10.1145/3492321.3519576},
abstract = {Autonomous vehicles (AVs) must drive across a variety of challenging environments that impose continuously-varying deadlines and runtime-accuracy tradeoffs on their software pipelines. A deadline-driven execution of such AV pipelines requires a new class of systems that enable the computation to maximize accuracy under dynamically-varying deadlines. Designing these systems presents interesting challenges that arise from combining ease-of-development of AV pipelines with deadline specification and enforcement mechanisms.Our work addresses these challenges through D3 (Dynamic Deadline-Driven), a novel execution model that centralizes the deadline management, and allows applications to adjust their computation by modeling missed deadlines as exceptions. Further, we design and implement ERDOS, an open-source realization of D3 for AV pipelines that exposes finegrained execution events to applications, and provides mechanisms to speculatively execute computation and enforce deadlines between an arbitrary set of events. Finally, we address the crucial lack of AV benchmarks through our state-of-the-art open-source AV pipeline, Pylot, that works seamlessly across simulators and real AVs. We evaluate the efficacy of D3 and ERDOS by driving Pylot across challenging driving scenarios spanning 50km, and observe a 68% reduction in collisions as compared to prior execution models.},
booktitle = {Proceedings of the Seventeenth European Conference on Computer Systems},
pages = {453--471},
numpages = {19},
location = {Rennes, France},
series = {EuroSys '22}
}
@article{li2021metadrive,
author = {Li, Quanyi and Peng, Zhenghao and Xue, Zhenghai and Zhang, Qihang and Zhou, Bolei},
journal = {ArXiv preprint},
title = {Metadrive: Composing diverse driving scenarios for generalizable reinforcement learning},
url = {https://arxiv.org/abs/2109.12674},
volume = {abs/2109.12674},
year = {2021}
}
@article{peng2021learning,
author = {Peng, Zhenghao and Li, Quanyi and Hui, Ka Ming and Liu, Chunxiao and Zhou, Bolei},
journal = {Advances in Neural Information Processing Systems},
title = {Learning to Simulate Self-Driven Particles System with Coordinated Policy Optimization},
volume = {34},
year = {2021}
}
@inproceedings{peng2021safe,
author = {Peng, Zhenghao and Li, Quanyi and Liu, Chunxiao and Zhou, Bolei},
booktitle = {5th Annual Conference on Robot Learning},
title = {Safe Driving via Expert Guided Policy Optimization},
year = {2021}
}
@article{8421746,
 author = {Qin, Tong and Li, Peiliang and Shen, Shaojie},
 journal = {IEEE Transactions on Robotics},
 title = {VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator},
 year = {2018},
 volume = {34},
 number = {4},
 pages = {1004--1020},
 doi = {10.1109/TRO.2018.2853729}
}
@article{campos2021orb,
title={Orb-slam3: An accurate open-source library for visual, visual--inertial, and multimap slam},
author={Campos, Carlos and Elvira, Richard and Rodr{\'\i}guez, Juan J G{\'o}mez and Montiel, Jos{\'e} MM and Tard{\'o}s, Juan D},
journal={IEEE Transactions on Robotics},
volume={37},
number={6},
pages={1874--1890},
year={2021},
publisher={IEEE}
}
@inproceedings{li2021efficient,
author = {Li, Quanyi and Peng, Zhenghao and Zhou, Bolei},
booktitle = {International Conference on Learning Representations},
title = {Efficient Learning of Safe Driving Policy via Human-AI Copilot Optimization},
year = {2021}
}
@article{chaplot2020learning,
title={Learning to explore using active neural slam},
author={Chaplot, Devendra Singh and Gandhi, Dhiraj and Gupta, Saurabh and Gupta, Abhinav and Salakhutdinov, Ruslan},
journal={arXiv preprint arXiv:2004.05155},
year={2020}
}
@article{teed2021droid,
title={Droid-slam: Deep visual slam for monocular, stereo, and rgb-d cameras},
author={Teed, Zachary and Deng, Jia},
journal={Advances in Neural Information Processing Systems},
volume={34},
year={2021}
}
@article{brunke2021safe,
title={Safe learning in robotics: From learning-based control to safe reinforcement learning},
author={Brunke, Lukas and Greeff, Melissa and Hall, Adam W and Yuan, Zhaocong and Zhou, Siqi and Panerati, Jacopo and Schoellig, Angela P},
journal={Annual Review of Control, Robotics, and Autonomous Systems},
volume={5},
year={2021},
publisher={Annual Reviews}
}
@InProceedings{pmlr-v144-gama21a,
title = {Graph Neural Networks for Distributed Linear-Quadratic Control},
author = {Gama, Fernando and Sojoudi, Somayeh},
booktitle = {Proceedings of the 3rd Conference on Learning for Dynamics and Control},
pages = {111--124},
year = {2021},
editor = {Jadbabaie, Ali and Lygeros, John and Pappas, George J. and A. Parrilo, Pablo and Recht, Benjamin and Tomlin, Claire J. and Zeilinger, Melanie N.},
volume = {144},
series = {Proceedings of Machine Learning Research},
month = {07 -- 08 June},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v144/gama21a/gama21a.pdf},
url = {https://proceedings.mlr.press/v144/gama21a.html},
abstract = {The linear-quadratic controller is one of the fundamental problems in control theory. The optimal solution is a linear controller that requires access to the state of the entire system at any given time. When considering a network system, this renders the optimal controller a centralized one. The interconnected nature of a network system often demands a distributed controller, where different components of the system are controlled based only on local information. Unlike the classical centralized case, obtaining the optimal distributed controller is usually an intractable problem. Thus, we adopt a graph neural network (GNN) as a parametrization of distributed controllers. GNNs are naturally local and have distributed architectures, making them well suited for learning nonlinear distributed controllers. By casting the linear-quadratic problem as a self-supervised learning problem, we are able to find the best GNN-based distributed controller. We also derive sufficient conditions for the resulting closed-loop system to be stable. We run extensive simulations to study the performance of GNN-based distributed controllers and showcase that they are a computationally efficient parametrization with scalability and transferability capabilities.}
}
@InProceedings{pmlr-v144-mehrjou21a,
title = {Neural Lyapunov Redesign},
author = {Mehrjou, Arash and Ghavamzadeh, Mohammad and Sch\"olkopf, Bernhard},
booktitle = {Proceedings of the 3rd Conference on Learning for Dynamics and Control},
pages = {459--470},
year = {2021},
editor = {Jadbabaie, Ali and Lygeros, John and Pappas, George J. and A. Parrilo, Pablo and Recht, Benjamin and Tomlin, Claire J. and Zeilinger, Melanie N.},
volume = {144},
series = {Proceedings of Machine Learning Research},
month = {07 -- 08 June},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v144/mehrjou21a/mehrjou21a.pdf},
url = {https://proceedings.mlr.press/v144/mehrjou21a.html},
abstract = {Learning controllers merely based on a performance metric has been proven effective in many physical and non-physical tasks in both control theory and reinforcement learning. However, in practice, the controller must guarantee some notion of safety to ensure that it does not harm either the agent or the environment. Stability is a crucial notion of safety, whose violation can certainly cause unsafe behaviors. Lyapunov functions are effective tools to assess stability in nonlinear dynamical systems. In this paper, we combine an improving Lyapunov function with automatic controller synthesis in an iterative fashion to obtain control policies with large safe regions. We propose a two-player collaborative algorithm that alternates between estimating a Lyapunov function and deriving a controller that gradually enlarges the stability region of the closed-loop system. We provide theoretical results on the class of systems that can be treated with the proposed algorithm and empirically evaluate the effectiveness of our method using an exemplary dynamical system.}
}
@InProceedings{pmlr-v144-zhang21b,
title = {{LEOC}: A Principled Method in Integrating Reinforcement Learning and Classical Control Theory},
author = {Zhang, Naifu and Capel, Nicholas},
booktitle = {Proceedings of the 3rd Conference on Learning for Dynamics and Control},
pages = {689--701},
year = {2021},
editor = {Jadbabaie, Ali and Lygeros, John and Pappas, George J. and A. Parrilo, Pablo and Recht, Benjamin and Tomlin, Claire J. and Zeilinger, Melanie N.},
volume = {144},
series = {Proceedings of Machine Learning Research},
month = {07 -- 08 June},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v144/zhang21b/zhang21b.pdf},
url = {https://proceedings.mlr.press/v144/zhang21b.html},
abstract = {There have been attempts in reinforcement learning to exploit a priori knowledge about the structure of the system. This paper proposes a hybrid reinforcement learning controller which dynamically interpolates a model-based linear controller and an arbitrary differentiable policy. The linear controller is designed based on local linearised model knowledge, and stabilises the system in a neighbourhood about an operating point. The coefficients of interpolation between the two controllers are determined by a scaled distance function measuring the distance between the current state and the operating point. The overall hybrid controller is proven to maintain the stability guarantee around the neighborhood of the operating point and still possess the universal function approximation property of the arbitrary non-linear policy. Learning has been done on both model-based (PILCO) and model-free (DDPG) frameworks. Simulation experiments performed in OpenAI gym demonstrate stability and robustness of the proposed hybrid controller. This paper thus introduces a principled method allowing for the direct importing of control methodology into reinforcement learning.}
}
@InProceedings{pmlr-v144-rafailov21a,
title = {Offline Reinforcement Learning from Images with Latent Space Models},
author = {Rafailov, Rafael and Yu, Tianhe and Rajeswaran, Aravind and Finn, Chelsea},
booktitle = {Proceedings of the 3rd Conference on Learning for Dynamics and Control},
pages = {1154--1168},
year = {2021},
editor = {Jadbabaie, Ali and Lygeros, John and Pappas, George J. and A. Parrilo, Pablo and Recht, Benjamin and Tomlin, Claire J. and Zeilinger, Melanie N.},
volume = {144},
series = {Proceedings of Machine Learning Research},
month = {07 -- 08 June},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v144/rafailov21a/rafailov21a.pdf},
url = {https://proceedings.mlr.press/v144/rafailov21a.html},
abstract = {Offline reinforcement learning (RL) refers to the task of learning policies from a static dataset of environment interactions. Offline RL enables extensive utilization and re-use of historical datasets, while also alleviating safety concerns associated with online exploration, thereby expanding the real-world applicability of RL. Most prior work in offline RL has focused on tasks with compact state representations. However, the ability to learn directly from rich observation spaces like images is critical for real-world applications like robotics. In this work, we build on recent advances in model-based algorithms for offline RL, and extend them to high-dimensional visual observation spaces. Model-based offline RL algorithms have achieved state of the art results in state based tasks and are minimax optimal. However, they rely crucially on the ability to quantify uncertainty in the model predictions. This is particularly challenging with image observations. To overcome this challenge, we propose to learn a latent-state dynamics model, and represent the uncertainty in the latent space. Our approach is both tractable in practice and corresponds to maximizing a lower bound of the ELBO in the unknown POMDP. Through experiments on a range of challenging image-based locomotion and robotic manipulation tasks, we find that our algorithm significantly outperforms previous offline model-free RL methods as well as state-of-the-art online visual model-based RL methods. Moreover, we also find that our approach excels on an image-based drawer closing task on a real robot using a pre-existing dataset. All results including videos can be found online at \url{https://sites.google.com/view/lompo/}.}
}
@inproceedings{chen2020transferable,
title={Transferable active grasping and real embodied dataset},
author={Chen, Xiangyu and Ye, Zelin and Sun, Jiankai and Fan, Yuda and Hu, Fang and Wang, Chenxi and Lu, Cewu},
booktitle={2020 IEEE International Conference on Robotics and Automation (ICRA)},
pages={3611--3618},
year={2020},
organization={IEEE}
}
@article{sun2021adversarial,
title={Adversarial inverse reinforcement learning with self-attention dynamics model},
author={Sun, Jiankai and Yu, Lantao and Dong, Pinqian and Lu, Bo and Zhou, Bolei},
journal={IEEE Robotics and Automation Letters},
volume={6},
number={2},
pages={1880--1886},
year={2021},
publisher={IEEE}
}
@article{huang2018navigationnet,
title={NavigationNet: A large-scale interactive indoor navigation dataset},
author={Huang, He and Shen, Yujing and Sun, Jiankai and Lu, Cewu},
journal={arXiv preprint arXiv:1808.08374},
year={2018}
}
@inproceedings{xu2019depth,
title={Depth completion from sparse lidar data with depth-normal constraints},
author={Xu, Yan and Zhu, Xinge and Shi, Jianping and Zhang, Guofeng and Bao, Hujun and Li, Hongsheng},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={2811--2820},
year={2019}
}
@inproceedings{zhu2020ssn,
title={Ssn: Shape signature networks for multi-class object detection from point clouds},
author={Zhu, Xinge and Ma, Yuexin and Wang, Tai and Xu, Yan and Shi, Jianping and Lin, Dahua},
booktitle={European Conference on Computer Vision},
pages={581--597},
year={2020},
organization={Springer}
}
@inproceedings{huang2019prior,
title={Prior guided dropout for robust visual localization in dynamic environments},
author={Huang, Zhaoyang and Xu, Yan and Shi, Jianping and Zhou, Xiaowei and Bao, Hujun and Zhang, Guofeng},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={2791--2800},
year={2019}
}
@article{xu2020selfvoxelo,
title={Selfvoxelo: Self-supervised lidar odometry with voxel-based deep neural networks},
author={Xu, Yan and Huang, Zhaoyang and Lin, Kwan-Yee and Zhu, Xinge and Shi, Jianping and Bao, Hujun and Zhang, Guofeng and Li, Hongsheng},
journal={arXiv preprint arXiv:2010.09343},
year={2020}
}
@article{huang2021life,
title={LIFE: Lighting Invariant Flow Estimation},
author={Huang, Zhaoyang and Pan, Xiaokun and Xu, Runsen and Xu, Yan and Zhang, Guofeng and Li, Hongsheng and others},
journal={arXiv preprint arXiv:2104.03097},
year={2021}
}
@inproceedings{huang2021vs,
title={VS-Net: Voting with Segmentation for Visual Localization},
author={Huang, Zhaoyang and Zhou, Han and Li, Yijin and Yang, Bangbang and Xu, Yan and Zhou, Xiaowei and Bao, Hujun and Zhang, Guofeng and Li, Hongsheng},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={6101--6111},
year={2021}
}
@article{yang2021pdnet,
title={PDNet: Towards Better One-stage Object Detection with Prediction Decoupling},
author={Yang, Li and Xu, Yan and Wang, Shaoru and Yuan, Chunfeng and Zhang, Ziqi and Li, Bing and Hu, Weiming},
journal={arXiv preprint arXiv:2104.13876},
year={2021}
}
@article{xu2022robust,
title={Robust Self-supervised LiDAR Odometry via Representative Structure Discovery and 3D Inherent Error Modeling},
author={Xu, Yan and Lin, Junyi and Shi, Jianping and Zhang, Guofeng and Wang, Xiaogang and Li, Hongsheng},
journal={IEEE Robotics and Automation Letters},
year={2022},
publisher={IEEE}
}
@article{xu2022rnnpose,
title={RNNPose: Recurrent 6-DoF Object Pose Refinement with Robust Correspondence Field Estimation and Pose Optimization},
author={Xu, Yan and Lin, Junyi and Zhang, Guofeng and Wang, Xiaogang and Li, Hongsheng},
journal={arXiv preprint arXiv:2203.12870},
year={2022}
}
@article{Sun2022SelfSupervisedTA,
title={Self-Supervised Traffic Advisors: Distributed, Multi-view Traffic Prediction for Smart Cities},
author={Jiankai Sun and Shreyas Kousik and David Fridovich-Keil and Mac Schwager},
journal={arXiv preprint},
year={2022}
}