Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
awesome-ChatGPT
A curated collection of ChatGPT-related resources
https://github.com/zhoucz97/awesome-ChatGPT
Last synced: 3 days ago
ChatGPT Technical Articles
- Proximal Policy Optimization (PPO) Explained | by Wouter van Heeswijk, PhD | Towards Data Science
- Zhang Junlin: The Road to AGI: Technical Essentials of Large Language Models (LLMs) - Zhihu (zhihu.com)
- Fu Yao: Tracing the Origins of GPT-3.5's Various Abilities
- Fu Yao livestream: Pretraining, Instruction Tuning, Alignment, Specialization: On the Sources of Large Language Model Capabilities
- Factual Errors in Large Dialogue Models: The Flaws of ChatGPT (qq.com)
- The "Unsung Hero" Behind ChatGPT: A Detailed Look at RLHF (qq.com)
- SCIR Notes | A Brief Analysis of ChatGPT's Principles and Applications (qq.com)
- SCIR Notes | ChatGPT Part 2: The PPO Algorithm (qq.com)
- A Complete Guide to In-Context Learning Techniques (qq.com)
- Why Have All Public Reproductions of GPT-3 Failed? What You Should Know About Reproducing and Using GPT-3/ChatGPT (qq.com)
- Decoding the Key Techniques Behind ChatGPT: RLHF, IFT, CoT, and Red Teaming - Zhihu (zhihu.com)
- [Reinforcement Learning 229] ChatGPT/InstructGPT - Zhihu (zhihu.com)
- ChatGPT/InstructGPT Explained - Zhihu (zhihu.com)
- How Was ChatGPT Trained? - bilibili
- datawhalechina/easy-rl: A Chinese reinforcement learning tutorial (the "Mushroom Book")
- The Pile: An Overview of an 800GB Diverse Text Dataset Spanning 22 Categories, a Staple for Large-Scale Language Model Training
- Implementing ChatGPT from Scratch: Notes on RLHF - Zhihu (zhihu.com)
- Illustrating Reinforcement Learning from Human Feedback (RLHF) (huggingface.co)
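Many of the entries above center on PPO, the policy-gradient algorithm used in ChatGPT's RLHF stage. As a quick reference, here is a minimal NumPy sketch of PPO's clipped surrogate objective; this is an illustrative snippet, not taken from any of the linked articles, and the function and variable names are our own:

```python
import numpy as np

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    """PPO clipped surrogate loss (to be minimized by gradient descent).

    logp_new / logp_old: log-probabilities of the sampled actions under
    the current and behavior policies; advantages: advantage estimates.
    """
    ratio = np.exp(logp_new - logp_old)  # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # Pessimistic bound: element-wise minimum, negated for minimization.
    return -np.mean(np.minimum(unclipped, clipped))

# When both policies agree (ratio = 1), the loss is just -mean(advantage).
loss = ppo_clip_loss(np.zeros(3), np.zeros(3), np.array([1.0, 2.0, 3.0]))
```

The clip keeps each update close to the behavior policy, which is why the RLHF write-ups above typically pair PPO with an additional KL penalty against the pretrained model.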
Hands-On Implementations of ChatGPT/RLHF
- allenai/RL4LMs: A modular RL library to fine-tune language models to human preferences (github.com)
- CarperAI/trlx: A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) (github.com)
- karpathy/nanoGPT
- tatsu-lab/stanford_alpaca: Code and documentation to train Stanford's Alpaca models, and generate the data. (github.com)
- yizhongw/self-instruct: Aligning pretrained language models with instruction data generated by themselves. (github.com)
- GPT-3 + RL Full Training Pipeline: A Collection of Open-Source Resources - Zhihu (zhihu.com)
- transformers_tasks/readme.md at main · HarderThenHarder/transformers_tasks (github.com)
- Stanford CRFM official blog
- Stanford Alpaca: An Open-Source Academic Reproduction of ChatGPT - Zhihu (zhihu.com)
- Alpaca-LoRA: A Lightweight Open-Source ChatGPT Implementation (Benchmarked Against Stanford Alpaca) - Zhihu (zhihu.com)
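The training repos above (RL4LMs, trlx, stanford_alpaca) all fit a reward model on human preference pairs before the RL step. A minimal sketch of the standard pairwise (Bradley-Terry-style) reward-model loss, with invented names for illustration:

```python
import numpy as np

def reward_pair_loss(r_chosen, r_rejected):
    """Pairwise ranking loss for an RLHF reward model:
    -log(sigmoid(r_chosen - r_rejected)), averaged over pairs.
    Training pushes the model to score the human-preferred response higher."""
    margin = np.asarray(r_chosen, dtype=float) - np.asarray(r_rejected, dtype=float)
    # log1p(exp(-x)) is a stable form of -log(sigmoid(x)) for non-negative margins.
    return float(np.mean(np.log1p(np.exp(-margin))))

# Equal scores give -log(1/2) = log 2; a large positive margin drives the loss toward 0.
```

The scalar rewards this model produces are what the PPO stage then maximizes.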
ChatGPT Applications

Calling External Tools
- wong2/chatgpt-google-extension
- AutumnWhj/ChatGPT-wechat-bot
- PlexPt/awesome-chatgpt-prompts-zh
- Using Prompts and Chains to Turn ChatGPT into a Magical Productivity Tool
- acheong08/ChatGPT: Reverse engineered ChatGPT API (github.com)
- Reinventing search with a new AI-powered Microsoft Bing and Edge, your copilot for the web - The Official Microsoft Blog
- Building the New Bing | Search Quality Insights
- acheong08's list / Awesome ChatGPT (github.com)
A General Introduction to ChatGPT
- How ChatGPT Was (Probably) Made: The Socialization of GPT - YouTube
- [Generative AI] ChatGPT Explained (1/3): Common Misconceptions About ChatGPT - YouTube
- [Generative AI] ChatGPT Explained (2/3): Pre-training - YouTube
- [Generative AI] ChatGPT Explained (3/3): Research Questions Raised by ChatGPT - YouTube
- [Generative AI] Playing a Text Adventure Game with ChatGPT and Midjourney - YouTube
- InstructGPT, the Predecessor of ChatGPT: Learning from Human Feedback - bilibili
- ChatGPT's Development History, Principles, Technical Architecture, and Industry Future (from the "In-Depth Readings on Advanced AI" collection) - Zhihu (zhihu.com)
- Che Wanxiang: Risks and Opportunities for NLP Researchers in the ChatGPT Era
- ChatGPT: Optimizing Language Models for Dialogue (openai.com)
- The Team Behind ChatGPT (87 People)
ChatGPT Discussions
- Are There Any Domestic Models with ChatGPT-Level Capabilities? - Zhihu (zhihu.com)
- Knowledge Graphs in the Age of ChatGPT: A Discussion of Inevitable Impacts and Future Directions (qq.com)
- How High Are ChatGPT's Technical Barriers? Besides OpenAI, Who Else Can Achieve Something Similar? - Zhihu (zhihu.com)
- Why Has ChatGPT's Multi-Turn Dialogue Ability Improved So Dramatically? - Zhihu (zhihu.com)
- Why Does Yann LeCun Take a Negative View of ChatGPT? - Zhihu (zhihu.com)
- How Did OpenAI Upend the AI Game Google Spent Years Setting Up? (qq.com)
- Li rumor: The Difficulties of Catching Up with ChatGPT, and Its Alternatives
- How Should We Evaluate ChatGPT? Will It Replace Search Engines? - Zhihu (zhihu.com)
- How Should We Evaluate OpenAI's Super Dialogue Model ChatGPT? - Zhihu (zhihu.com)
- ChatGPT Confirms the Feasibility of a Unified Model: How Will This Affect NLP Practitioners over the Next Five Years? - Zhihu (zhihu.com)
- What Obstacles Keep Domestic Teams from Building a Product Like ChatGPT: Technology, Money, or Leadership? - Zhihu (zhihu.com)
- Will ChatGPT Combined with Industrial Robots Lock Developing Countries Out of Their Rise? - Zhihu (zhihu.com)
- Qingyuan Talk No. 33 | Knowledge and Control in Text Generation - bilibili
ChatGPT-Related Papers

Prompt
- Self-Instruct: Aligning Language Models with Self-Generated Instructions (arxiv.org/abs/2212.10560)
- Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (arxiv.org)
- Large Language Models are Zero-Shot Reasoners (arxiv.org)
- Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? - ACL Anthology
- Towards Understanding Chain-of-Thought Prompting: An Empirical Study of What Matters (arxiv.org)
- A Survey on In-context Learning (arxiv.org)
- Scaling Laws for Neural Language Models (arxiv.org)
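The zero-shot and chain-of-thought papers listed above come down to how the prompt is assembled. A tiny sketch of both styles, using an invented question (the trigger phrase "Let's think step by step." is the one from the Zero-Shot Reasoners paper; the exemplar is in the style of the chain-of-thought paper):

```python
question = "A farm has 3 pens with 4 sheep each; 2 sheep are sold. How many remain?"

# Zero-shot CoT: append a reasoning trigger instead of any exemplars.
zero_shot_cot = f"Q: {question}\nA: Let's think step by step."

# Few-shot CoT: prepend a worked exemplar whose answer spells out its reasoning.
exemplar = (
    "Q: Roger has 5 balls and buys 2 cans of 3 balls. How many balls now?\n"
    "A: Roger started with 5. 2 cans of 3 is 6. 5 + 6 = 11. The answer is 11.\n"
)
few_shot_cot = exemplar + f"Q: {question}\nA:"
```

Either string would then be sent to the model as-is; the surveys above compare how such demonstrations and triggers affect in-context learning.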
GPT Series

RLHF

Calling External Tools

How to Try ChatGPT

ChatGPT Industry Analysis
Keywords
chatgpt (4), reinforcement-learning (3), language-model (2), best-practices (1), bard (1), gpt (1), chatgpt4 (1), chatgpt3 (1), chat-gpt (1), firefox-addon (1), chrome-extension (1), browser-extension (1), instruction-tuning (1), general-purpose-model (1), instruction-following (1), deep-learning (1), pytorch (1), machine-learning (1), text-generation (1), table-to-text (1), summarization (1), nlp (1), natural-language-processing (1), machine-translation (1), language-modeling (1), revchatgpt (1), pypi-package (1), library (1), gptchat (1), gpt-35-turbo (1), cli (1), td3 (1), sarsa (1), q-learning (1), ppo (1), policy-gradient (1), imitation-learning (1), easy-rl (1), dueling-dqn (1), dqn (1), double-dqn (1), deep-reinforcement-learning (1), ddpg (1), a3c (1), prompts-cn (1), prompts (1), productivity-tools (1), openai-chatgpt (1), large-language-models (1), google-bard (1)