{"id":13790189,"url":"https://github.com/sunshineatnoon/Paper-Collection","last_synced_at":"2025-05-12T07:31:42.387Z","repository":{"id":95776562,"uuid":"50411671","full_name":"sunshineatnoon/Paper-Collection","owner":"sunshineatnoon","description":"A track of papers I read","archived":false,"fork":false,"pushed_at":"2019-06-13T20:46:27.000Z","size":28617,"stargazers_count":185,"open_issues_count":0,"forks_count":66,"subscribers_count":17,"default_branch":"master","last_synced_at":"2024-11-18T04:35:48.394Z","etag":null,"topics":["computer-vision","deep-learning","papers"],"latest_commit_sha":null,"homepage":"","language":null,"has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/sunshineatnoon.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null}},"created_at":"2016-01-26T07:30:54.000Z","updated_at":"2024-03-11T19:03:56.000Z","dependencies_parsed_at":null,"dependency_job_id":"ffbb8ff7-6124-4807-aef6-325a07c9e647","html_url":"https://github.com/sunshineatnoon/Paper-Collection","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/sunshineatnoon%2FPaper-Collection","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/sunshineatnoon%2FPaper-Collection/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/sunshineatnoon%2FPaper-Collection/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/sunshineatnoon%2FPaper-Collection/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/sunshineatnoon","
download_url":"https://codeload.github.com/sunshineatnoon/Paper-Collection/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":253695164,"owners_count":21948824,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["computer-vision","deep-learning","papers"],"created_at":"2024-08-03T22:00:38.526Z","updated_at":"2025-05-12T07:31:37.369Z","avatar_url":"https://github.com/sunshineatnoon.png","language":null,"readme":"# Paper Collection - A List of Computer Vision Papers and Notes\n- [Image Classification](#image-classification)\n- [Popular Module](#popular-module)\n- [Object Detection in Image](#object-detection-in-image)\n- [Image Caption](#image-caption)\n- [Image Generations](#image-generations)\n- [Image and Language](#image-and-language)\n- [Activation Maximization](#activation-maximization)\n- [Style Transfer](#style-transfer)\n- [Low-level Vision](#low-level-vision)\n- [Image Segmentation](#image-segmentation)\n- [Video Editing](#video-editing)\n- [Deep Matching](#deep-matching)\n- [Open Courses](#open-courses)\n- [Online Books](#online-books)\n- [Misc](#misc)\n\n\n### Image Classification:\nNetwork in Network [[Paper]](https://arxiv.org/abs/1312.4400) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/NIN.md) [[Torch Code]](https://github.com/szagoruyko/cifar.torch/blob/master/models/nin.lua)\n   * Lin, Min, Qiang Chen, and Shuicheng Yan. 
\"Network in network.\" arXiv preprint arXiv:1312.4400 (2013).\n\nVGG [[Paper]](https://arxiv.org/abs/1409.1556) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/VGG.md) [[Torch Code]](https://github.com/szagoruyko/cifar.torch/blob/master/models/vgg_bn_drop.lua)\n   * Simonyan, Karen, and Andrew Zisserman. \"Very deep convolutional networks for large-scale image recognition.\" arXiv preprint arXiv:1409.1556 (2014).\n\nGoogleNet [[Paper]](http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Szegedy_Going_Deeper_With_2015_CVPR_paper.pdf) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/GoogleNet.md) [[Torch Code]](https://github.com/soumith/inception.torch/blob/master/googlenet.lua)\n   * Szegedy, Christian, et al. \"Going deeper with convolutions.\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015.\n\nResNet [[Paper]](https://arxiv.org/pdf/1512.03385.pdf) [[Note]]() [[Torch Code]](https://github.com/facebook/fb.resnet.torch)\n   * He, Kaiming, et al. \"Deep residual learning for image recognition.\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016.\n\n### Popular Module\nDropout [[Paper]](http://www.jmlr.org/papers/volume15/srivastava14a.old/source/srivastava14a.pdf) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/Dropout.md)\n* Srivastava, Nitish, et al. \"Dropout: a simple way to prevent neural networks from overfitting.\" Journal of Machine Learning Research 15.1 (2014): 1929-1958.\n\nBatch Normalization [[Paper]](https://arxiv.org/abs/1502.03167) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/BN.md)\n* Ioffe S, Szegedy C. Batch normalization: Accelerating deep network training by reducing internal covariate shift[J]. 
arXiv preprint arXiv:1502.03167, 2015.\n\n### Object Detection in Image\nRCNN [[Paper]](http://arxiv.org/abs/1311.2524) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/RCNN.md) [[Code]](https://github.com/rbgirshick/rcnn)\n   * Ross Girshick, Jeff Donahue, Trevor Darrell, Jitendra Malik, Rich feature hierarchies for accurate object detection and semantic segmentation\n\nSpatial pyramid pooling in deep convolutional networks for visual recognition [[Paper]](http://arxiv.org/abs/1406.4729) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/SPPNet.md) [[Code]](https://github.com/ShaoqingRen/SPP_net)\n  * He K, Zhang X, Ren S, et al. Spatial pyramid pooling in deep convolutional networks for visual recognition[J]. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 2015, 37(9): 1904-1916.\n\nFast R-CNN [[Paper]](http://arxiv.org/pdf/1504.08083) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/Fast-RCNN.md) [[Code]](https://github.com/rbgirshick/fast-rcnn)\n   * Ross Girshick, Fast R-CNN, arXiv:1504.08083.\n\nFaster R-CNN, Microsoft Research [[Paper]](http://arxiv.org/pdf/1506.01497) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/Faster%20R-CNN.md) [[Code]](https://github.com/ShaoqingRen/faster_rcnn) [[Python Code]](https://github.com/rbgirshick/py-faster-rcnn)\n   * Shaoqing Ren, Kaiming He, Ross Girshick, Jian Sun, Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, arXiv:1506.01497.\n\nEnd-to-end people detection in crowded scenes [[Paper]](http://arxiv.org/abs/1506.04878) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/End-to-end-people-detection-in-crowded-scenes.md) [[Code]](https://github.com/Russell91/ReInspect)\n   * Russell Stewart, Mykhaylo Andriluka, End-to-end people detection in crowded scenes, arXiv:1506.04878.\n\nYou Only Look Once: Unified, Real-Time Object Detection [[Paper]](http://arxiv.org/abs/1506.02640) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/YOLO.md) [[Code]](http://pjreddie.com/darknet/yolo/)\n   * Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi, You Only Look Once: Unified, Real-Time Object Detection, arXiv:1506.02640\n\nAdaptive Object Detection Using Adjacency and Zoom Prediction [[Paper]](http://arxiv.org/abs/1512.07711) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/AZNet.md)\n   * Lu Y, Javidi T, Lazebnik S. Adaptive Object Detection Using Adjacency and Zoom Prediction[J]. arXiv:1512.07711, 2015.\n\nInside-Outside Net: Detecting Objects in Context with Skip Pooling and Recurrent Neural Networks [[Paper]](http://arxiv.org/abs/1512.04143) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/Inside-Outside-Net.md)\n   * Sean Bell, C. Lawrence Zitnick, Kavita Bala, Ross Girshick. arXiv:1512.04143, 2015.\n\nG-CNN: an Iterative Grid Based Object Detector [[Paper]](http://arxiv.org/abs/1512.07729v1)\n   * Mahyar Najibi, Mohammad Rastegari, Larry S. Davis. arXiv:1512.07729, 2015.\n\nSeq-NMS for Video Object Detection [[Paper]](http://arxiv.org/abs/1602.08465) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/Seq-NMS.md)\n   * Wei Han, Pooya Khorrami, Tom Le Paine, Prajit Ramachandran, Mohammad Babaeizadeh, Honghui Shi, Jianan Li, Shuicheng Yan, Thomas S. Huang. Seq-NMS for Video Object Detection. arXiv preprint arXiv:1602.08465, 2016\n\n### Image Caption\n\nExploring Nearest Neighbor Approaches for Image Captioning [[Paper]](http://arxiv.org/abs/1505.04467)\n   * Devlin J, Gupta S, Girshick R, et al. Exploring Nearest Neighbor Approaches for Image Captioning[J]. 
arXiv preprint arXiv:1505.04467, 2015.\n\nShow and Tell: A Neural Image Caption Generator [[Paper]](http://www.cv-foundation.org/openaccess/content_cvpr_2015/html/Vinyals_Show_and_Tell_2015_CVPR_paper.html) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/ShowAndTell.md)\n   * Vinyals, Oriol, et al. \"Show and tell: A neural image caption generator.\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015.\n\n### Image Generations:\nPixel Recurrent Neural Networks [[Paper]](https://arxiv.org/abs/1601.06759) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/pixel-rnn.md)\n * van den Oord A, Kalchbrenner N, Kavukcuoglu K. Pixel Recurrent Neural Networks[J]. arXiv preprint arXiv:1601.06759, 2016.\n\nVariational Autoencoder [[Paper]](http://arxiv.org/abs/1312.6114) [[Note]](http://sunshineatnoon.github.io/VAE/)\n   * Kingma D P, Welling M. Auto-encoding variational bayes[J]. arXiv preprint arXiv:1312.6114, 2013.\n\nDRAW: A recurrent neural network for image generation [[Paper]](http://arxiv.org/abs/1502.04623) [[Torch Code]](https://github.com/vivanov879/draw) [[Tensorflow Code]](https://github.com/ericjang/draw) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/DRAW.md)\n   * Gregor K, Danihelka I, Graves A, et al. DRAW: A recurrent neural network for image generation[J]. arXiv preprint arXiv:1502.04623, 2015.\n\nScribbler: Controlling Deep Image Synthesis with Sketch and Color [[Paper]](https://arxiv.org/pdf/1612.00835v2.pdf) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/scribble.md)\n   * Patsorn Sangkloy, Jingwan Lu, et al. Scribbler: Controlling Deep Image Synthesis with Sketch and Color. arXiv preprint arXiv:1612.00835, 2016.\n\nUnsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks [[Paper]](http://arxiv.org/abs/1511.06434)\n  * Radford A, Metz L, Chintala S. 
Unsupervised representation learning with deep convolutional generative adversarial networks[J]. arXiv preprint arXiv:1511.06434, 2015.\n\nImproved Techniques for Training GANs [[Paper]](http://arxiv.org/abs/1606.03498)\n  * Salimans T, Goodfellow I, Zaremba W, et al. Improved Techniques for Training GANs[J]. arXiv preprint arXiv:1606.03498, 2016.\n\nInfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets [[Paper]](https://arxiv.org/abs/1606.03657)\n  * Chen X, Duan Y, Houthooft R, et al. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets[J]. arXiv preprint arXiv:1606.03657, 2016.\n\nImage-to-Image Translation with Conditional Adversarial Networks [[Paper]](https://arxiv.org/abs/1611.07004) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/pix2pix.md) [[Torch Code]](https://github.com/phillipi/pix2pix) [[Tensorflow Code]](https://github.com/yenchenlin/pix2pix-tensorflow)\n  * Isola P, Zhu J Y, Zhou T, et al. Image-to-Image Translation with Conditional Adversarial Networks[J]. arXiv preprint arXiv:1611.07004, 2016.\n\nLearning to Generate Images of Outdoor Scenes from Attributes and Semantic Layouts [[Paper]](https://arxiv.org/abs/1612.00215) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/AL_CGAN.md)\n  * Levent Karacan, Zeynep Akata, Aykut Erdem, Erkut Erdem. Learning to Generate Images of Outdoor Scenes from Attributes and Semantic Layouts[J]. arXiv preprint arXiv:1612.00215, 2016.\n\nLearning to Discover Cross-Domain Relations with Generative Adversarial Networks [[Paper]](https://arxiv.org/abs/1703.05192) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/DiscoGAN.md)\n* Kim, Taeksoo, et al. 
\"Learning to Discover Cross-Domain Relations with Generative Adversarial Networks.\" arXiv preprint arXiv:1703.05192 (2017).\n\nUnpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks  [[Paper]](https://arxiv.org/abs/1703.10593)  [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/cycleGAN.md)\n * Zhu J Y, Park T, Isola P, et al. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks[J]. arXiv preprint arXiv:1703.10593, 2017.\n\nBEGAN: Boundary Equilibrium Generative Adversarial Networks [[Paper]](https://arxiv.org/abs/1703.10717) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/BEGAN.md)\n * Berthelot, David, Tom Schumm, and Luke Metz. \"BEGAN: Boundary Equilibrium Generative Adversarial Networks.\" arXiv preprint arXiv:1703.10717 (2017).\n\nStackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks [[Paper]](https://arxiv.org/abs/1612.03242) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/StackGAN.md) [[Tensorflow Code]](https://github.com/hanzhanggit/StackGAN)\n * Zhang, Han, et al. \"StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks.\" arXiv preprint arXiv:1612.03242 (2016).\n\nInvertible Conditional GANs for image editing [[Paper]](https://arxiv.org/abs/1611.06355) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/documents/IcGAN.md)\n* Perarnau G, van de Weijer J, Raducanu B, et al. Invertible Conditional GANs for image editing[J]. arXiv preprint arXiv:1611.06355, 2016.\n\nStacked Generative Adversarial Networks [[Paper]](https://arxiv.org/abs/1612.04357) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/documents/SGAN.md)\n* Huang X, Li Y, Poursaeed O, et al. Stacked generative adversarial networks[J]. 
arXiv preprint arXiv:1612.04357, 2016.\n\nRotating Your Face Using Multi-task Deep Neural Network [[Paper]](http://www.cv-foundation.org/openaccess/content_cvpr_2015/html/Yim_Rotating_Your_Face_2015_CVPR_paper.html) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/documents/Yim16.md)\n* Yim J, Jung H, Yoo B I, et al. Rotating your face using multi-task deep neural network[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015: 676-684.\n\n### Image and Language\nLearning Deep Representations of Fine-Grained Visual Descriptions [[Paper]](https://arxiv.org/abs/1605.05395) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/DS-JSE.pdf)\n  * Reed, Scott, et al. \"Learning deep representations of fine-grained visual descriptions.\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016.\n\n### Activation Maximization\nSynthesizing the preferred inputs for neurons in neural networks via deep generator networks [[Paper]](https://arxiv.org/abs/1605.09304) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/DGN_AM.md)\n  * Nguyen A, Dosovitskiy A, Yosinski J, et al. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks[J]. arXiv preprint arXiv:1605.09304, 2016.\n\n### Style Transfer\nA neural algorithm of artistic style [[Paper]](http://arxiv.org/abs/1508.06576) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/A%20Neural%20Algorithm%20of%20Artistic%20Style.md)\n  * Gatys L A, Ecker A S, Bethge M. A neural algorithm of artistic style[J]. arXiv preprint arXiv:1508.06576, 2015.\n\nPerceptual losses for real-time style transfer and super-resolution [[Paper]](https://arxiv.org/abs/1603.08155) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/Perceptual%20Losses%20Neural%20Style.md)\n  * Johnson J, Alahi A, Fei-Fei L. 
Perceptual losses for real-time style transfer and super-resolution[J]. arXiv preprint arXiv:1603.08155, 2016.\n\nPreserving Color in Neural Artistic Style Transfer [[Paper]](https://arxiv.org/abs/1606.05897) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/preserveNT.md) [[Pytorch Code]](https://github.com/sunshineatnoon/Paper-Implementations/tree/master/NeuralSytleTransfer#neural-style-transfer-with-color-preservation)\n  * Gatys, Leon A., et al. \"Preserving color in neural artistic style transfer.\" arXiv preprint arXiv:1606.05897 (2016).\n\nA Learned Representation For Artistic Style [[Paper]](https://arxiv.org/pdf/1610.07629.pdf) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/multi-neural.md) [[Tensorflow Code]](https://github.com/tensorflow/magenta/tree/master/magenta/models/image_stylization) [[Lasagne Code]](https://github.com/joelmoniz/gogh-figure)\n  * Dumoulin, Vincent, Jonathon Shlens, and Manjunath Kudlur. \"A learned representation for artistic style.\" (2017).\n\nDemystifying Neural Style Transfer [[Paper]](https://arxiv.org/abs/1701.01036)\n  * Li, Yanghao, et al. \"Demystifying Neural Style Transfer.\" arXiv preprint arXiv:1701.01036 (2017).\n\nArbitrary Style Transfer in Real-time with Adaptive Instance Normalization [[Paper]](https://arxiv.org/abs/1703.06868)\n  * Huang, Xun, and Serge Belongie. \"Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization.\" arXiv preprint arXiv:1703.06868 (2017).\n\nFast Patch-based Style Transfer of Arbitrary Style [[Paper]](https://arxiv.org/pdf/1612.04337v1.pdf)\n  * Chen, Tian Qi, and Mark Schmidt. 
\"Fast Patch-based Style Transfer of Arbitrary Style.\" arXiv preprint arXiv:1612.04337 (2016).\n\n### Low-level vision\nTexture Enhancement via High-Resolution Style Transfer for Single-Image Super-Resolution [[Paper]](https://arxiv.org/abs/1612.00085) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/style_SR.md)\n  * Il Jun Ahn, Woo Hyun Nam. Texture Enhancement via High-Resolution Style Transfer for Single-Image Super-Resolution [J]. arXiv preprint arXiv:1612.00085, 2016.\n\nDeep Joint Image Filtering [[Paper]](https://pdfs.semanticscholar.org/9bc0/d4609fadc139480096ca95772bd82303a985.pdf) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/documents/ECCVJointFilter.md)\n  * Li Y, Huang J B, Ahuja N, et al. Deep joint image filtering[C]//European Conference on Computer Vision. Springer International Publishing, 2016: 154-169.\n  \n### Image Segmentation  \nFully convolutional networks for semantic segmentation [[Paper]](https://arxiv.org/abs/1411.4038) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/FCN.md)\n   * Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015: 3431-3440.\n\n### Video Editing\nDeep Video Color Propagation [[Paper]](https://arxiv.org/abs/1808.03232) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/documents/deepcolorprop/deepvideocolorprop.md)\n  * Meyer S, Cornillère V, Djelouah A, et al. Deep Video Color Propagation. BMVC 2018.\n\n### Deep Matching\nAnchorNet: A Weakly Supervised Network to Learn Geometry-sensitive Features For Semantic Matching [[Paper]](http://openaccess.thecvf.com/content_cvpr_2017/papers/Novotny_AnchorNet_A_Weakly_CVPR_2017_paper.pdf) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/documents/anchorNet/anchorNet.md)\n  * Novotný D, Larlus D, Vedaldi A. 
AnchorNet: A Weakly Supervised Network to Learn Geometry-Sensitive Features for Semantic Matching. CVPR 2017.\n\n### Open Courses\n* CS231n: Convolutional Neural Networks for Visual Recognition [[Course Page]](http://vision.stanford.edu/teaching/cs231n/index.html)\n* CS224d: Deep Learning for Natural Language Processing [[Course Page]](http://cs224d.stanford.edu/index.html)\n\n### Online Books\n* [Deep Learning](http://www.deeplearningbook.org) by Ian Goodfellow, Yoshua Bengio and Aaron Courville\n\n### Mathematics\n* Introduction to Probability Models, Sheldon M. Ross\n\n### Misc\nk-means++: The advantages of careful seeding [[Paper]](http://theory.stanford.edu/~sergei/papers/kMeansPP-soda.pdf) [[Note]](https://github.com/sunshineatnoon/Paper-Collection/blob/master/k-means++.md)\n   * Arthur D, Vassilvitskii S. k-means++: The advantages of careful seeding[C]//Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete algorithms. Society for Industrial and Applied Mathematics, 2007: 1027-1035.\n","funding_links":[],"categories":["论文集合"],"sub_categories":["其他"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsunshineatnoon%2FPaper-Collection","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fsunshineatnoon%2FPaper-Collection","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsunshineatnoon%2FPaper-Collection/lists"}