# MLDS-Note

Study notes for the course [Machine Learning and having it deep and structured (2018, Spring)](http://speech.ee.ntu.edu.tw/~tlkagk/courses_MLDS18.html) by Hung-yi Lee.

| hw1  | hw3                                                          |
| :--: | :----------------------------------------------------------: |
| <div align="center"> <img src="./png/hw1.png" width="350"/> </div><br>  | <div align="center"> <img src="./png/hw3.gif" width="350"/> </div><br> |
| hw2 | hw4 |
|  |  |

## Table of Contents

### [chapter 1: Why Deep Structure](ch1/ch1.md)

- [1-1 Can Shallow Network Fit Any Function](ch1/ch1_1.md)
- [1-2 Potential of Deep](ch1/ch1_2.md)
- [1-3 Is Deep Better than Shallow](ch1/ch1_3.md)

### [chapter 2: Optimization](ch2/ch2.md)

- [2-1 When Gradient is Zero](ch2/ch2_1.md)
- [2-2 Deep Linear Network](ch2/ch2_2.md)
- [2-3 Does Deep Network Have Local Minima](ch2/ch2_3.md)
- [2-4 Geometry of Loss Surfaces (Conjecture)](ch2/ch2_4.md)
- [2-5 Geometry of Loss Surfaces (Empirical)](ch2/ch2_5.md)

### [chapter 3: Generalization](ch3/ch3.md)

- [3-1 Capability of Generalization](ch3/ch3_1.md)
- [3-2 Indicator of Generalization](ch3/ch3_2.md)

### [chapter 4: Computational Graph](ch4/ch4.md) (coming soon)

### [chapter 5: Special Network Structure](ch5/ch5.md)

- [5-1 RNN with Gated Mechanism](ch5/ch5_1.md)
- [5-2 Sequence Generation](ch5/ch5_2.md)
- [5-3 Conditional Sequence Generation](ch5/ch5_3.md)
- [5-4 Tips for Generation](ch5/ch5_4.md)
- [5-5 Pointer Network](ch5/ch5_5.md)
- [5-6 Recursive Structure](ch5/ch5_6.md)
- [5-7 Attention-based Model](ch5/ch5_7.md)

### [chapter 6: Special Training Technology](ch6/ch6.md)

- [6-1 Tips for Training Deep Network](ch6/ch6_1.md)
- [6-2 Automatically Determining Hyperparameters](ch6/ch6_2.md)

### [chapter 7: Generative Adversarial Network (GAN)](ch7/ch7.md)

- [7-1 Introduction of Generative Adversarial Network (GAN)](ch7/ch7_1.md)
- [7-2 Conditional Generation by GAN](ch7/ch7_2.md)
- [7-3 Unsupervised Conditional Generation](ch7/ch7_3.md)
- [7-4 Theory behind GAN](ch7/ch7_4.md)
- [7-5 fGAN: General Framework of GAN](ch7/ch7_5.md)
- [7-6 Tips for Improving GAN](ch7/ch7_6.md)
- [7-7 Feature Extraction](ch7/ch7_7.md)
- [7-8 Intelligent Photo Editing](ch7/ch7_8.md)
- [7-9 Application to Sequence Generation](ch7/ch7_9.md) (coming soon)
- [7-10 Evaluation](ch7/ch7_10.md)

### [chapter 8: Deep Reinforcement Learning](ch8/ch8.md)

- [8-1 Introduction of Reinforcement Learning](ch8/ch8_1.md)
- [8-2 Policy-based Approach (Learning an Actor)](ch8/ch8_2.md)
- [8-3 Proximal Policy Optimization (PPO)](ch8/ch8_3.md)
- [8-4 Q-Learning (1)](ch8/ch8_4.md)
- [8-5 Q-Learning (2)](ch8/ch8_5.md)
- [8-6 Actor-Critic](ch8/ch8_6.md)
- [8-7 Sparse Reward](ch8/ch8_7.md)
- [8-8 Imitation Learning](ch8/ch8_8.md)

### [Appendix: homework](homework/README.md)

- [hw1](homework/README.md): finished
- [hw2](homework/README.md): TODO (not planned for the near term)
- [hw3](homework/README.md): coming soon
- [hw4](homework/README.md): coming soon

## Notes

1. All of the material above comes from the [MLDS](http://speech.ee.ntu.edu.tw/~tlkagk/courses_MLDS18.html) course. Discussion and exchange are welcome; these notes may contain errors in my personal understanding, so please point them out.
2. You can clone this repository locally and read the notes with any Markdown reader (I personally edit with Typora).
3. Please do not use this material for commercial purposes.
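For reading the notes offline, a minimal sketch of the clone-and-browse workflow described above (the repository URL is taken from the project page; the choice of Markdown reader is up to you, Typora is just what the author uses):

```shell
# Clone the notes to a local folder (shallow clone keeps the download small)
git clone --depth 1 https://github.com/AceCoooool/MLDS-Note.git
cd MLDS-Note

# Each chapter lives in its own directory; open any note in a Markdown reader,
# e.g. the chapter 1 overview:
ls ch1/ch1.md
```

Images in the notes use relative paths (e.g. `./png/hw1.png`), so they render correctly only when the whole repository is cloned, not when single files are downloaded.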