Review of some contemporary trends in machine learning technology
https://doi.org/10.26425/2658-3445-2018-1-26-35
Abstract
About the Author
M. Koroteev, Russian Federation
References
1. Caruana R. (1998), Multitask learning, Springer, Boston, MA, pp. 95–133; Dai J., He K., Sun J. (2014), “Instance-aware semantic segmentation via multi-task network cascades”, available at: https://www.cv-foundation.org/openaccess/content_cvpr_2016/app/S14-02.pdf (accessed September 6, 2018).
2. Dong D., Wu H., He W., Yu D. and Wang H. (2015), “Multi-task learning for multiple language translation”, Proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing, July 26–31, pp. 1723–1732.
3. Ghosn J. & Bengio Y. (1997), “Multi-task learning for stock selection”, available at: http://papers.nips.cc/paper/1221-multi-task-learning-for-stock-selection.pdf (accessed September 6, 2018).
4. Kaiser L. et al. (2017), “One model to learn them all”, available at: https://arxiv.org/abs/1706.05137 (accessed September 6, 2018).
5. Khokhlova D. (2016), “The neural networks boom: who makes neural networks, why they are needed and how much money they can bring”, June 2 [“Bum nejrosetej: Kto delaet nejronnye seti, zachem oni nuzhny i skol'ko deneg mogut prinosit'”, 02.06.2016], available at: https://vc.ru/16843-neural-networks (accessed September 6, 2018).
6. Koh P.W. & Liang P. (2017), “Understanding black-box predictions via influence functions”, Proceedings of the 34th international conference on machine learning, PMLR, vol. 70, pp. 1885–1894.
7. Le Q. & Zoph B. (2017), “Using machine learning to explore neural network architecture”, Google Research Blog, May 17, available at: https://research.googleblog.com/2017/05/using-machine-learning-to-explore.html (accessed September 6, 2018).
8. Li S., Liu Z.-Q. & Chan A. B. (2014), “Heterogeneous multi-task learning for human pose estimation with deep convolutional neural network”, available at: https://www.cv-foundation.org/openaccess/content_cvpr_workshops_2014/W15/papers/LI_Heterogeneous_Multi-task_Learning_2014_CVPR_paper.pdf (accessed September 6, 2018).
9. Luong M.-T. et al. (2016), “Multi-task sequence to sequence learning”, ICLR, available at: https://arxiv.org/abs/1511.06114 (accessed September 6, 2018).
10. Misra I., Shrivastava A., Gupta A. & Hebert M. (2016), “Cross-stitch networks for multi-task learning”, available at: https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Misra_Cross-Stitch_Networks_for_CVPR_2016_paper.pdf (accessed September 6, 2018).
11. Molnar C. (2018), “Interpretable machine learning”, available at: https://christophm.github.io/interpretable-ml-book/ (accessed September 6, 2018).
12. Olson R. (2016), “TPOT: A Python tool for automating data science”, available at: https://www.kdnuggets.com/2016/05/tpot-python-automating-data-science.html/2 (accessed September 6, 2018).
13. Olson R.S. & Moore J.H. (2016), “TPOT: A tree-based pipeline optimization tool for automating machine learning”, Proceedings of the workshop on automatic machine learning, PMLR, vol. 64, pp. 66–74.
14. Paredes B.R. et al. (2012), “Exploiting unrelated tasks in multi-task learning”, PMLR, vol. 22, pp. 951–959.
15. Thornton C. et al. (2013), “Auto-WEKA: combined selection and hyperparameter optimization of classification algorithms”, Proceedings of the 19th ACM SIGKDD international conference on knowledge discovery and data mining, pp. 847–855.
16. Yao X. (1999), “Evolving artificial neural networks”, Proceedings of the IEEE, vol. 87, no. 9, pp. 1423–1447.
17. Zhang Y. & Yang Q. (2017), “A survey on multi-task learning”, available at: https://arxiv.org/abs/1707.08114 (accessed September 6, 2018).
18. Zoph B. & Le Q.V. (2016), “Neural architecture search with reinforcement learning”, available at: https://arxiv.org/abs/1611.01578 (accessed September 6, 2018).
For citations:
Koroteev M. Review of some contemporary trends in machine learning technology. E-Management. 2018;(1):26-35. (In Russ.) https://doi.org/10.26425/2658-3445-2018-1-26-35