{"id":23237060,"url":"https://github.com/THUDM/P-tuning","last_synced_at":"2025-08-19T23:31:26.650Z","repository":{"id":37632702,"uuid":"348953105","full_name":"THUDM/P-tuning","owner":"THUDM","description":"A novel method to tune language models. Codes and datasets for paper ``GPT understands, too''.","archived":false,"fork":false,"pushed_at":"2022-10-06T12:36:12.000Z","size":6266,"stargazers_count":926,"open_issues_count":16,"forks_count":111,"subscribers_count":23,"default_branch":"main","last_synced_at":"2024-12-18T12:02:34.442Z","etag":null,"topics":["few-shot-learning","natural-language-processing","p-tuning","parameter-efficient-learning","pre-trained-language-models","prompt-tuning"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/THUDM.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE.md","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2021-03-18T05:33:07.000Z","updated_at":"2024-12-14T04:37:57.000Z","dependencies_parsed_at":"2022-07-14T08:17:39.136Z","dependency_job_id":null,"html_url":"https://github.com/THUDM/P-tuning","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/THUDM%2FP-tuning","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/THUDM%2FP-tuning/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/THUDM%2FP-tuning/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/THUDM%2FP-tuning/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/THUDM","download_url":"https://codeload.github.com/THUDM/P-tuning/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":230374271,"owners_count":18216044,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["few-shot-learning","natural-language-processing","p-tuning","parameter-efficient-learning","pre-trained-language-models","prompt-tuning"],"created_at":"2024-12-19T04:13:25.737Z","updated_at":"2024-12-19T04:13:26.376Z","avatar_url":"https://github.com/THUDM.png","language":"Python","readme":"# P-tuning\n## ❗ News \n\n🌟 [2022-10-06] Thrilled to present [GLM-130B: An Open Bilingual Pre-trained Model](https://arxiv.org/abs/2210.02414). It is an open-sourced LLM outperforming GPT-3 175B over various benchmarks. Get model weights and do inference and P-Tuning with only **4 * RTX 3090 or 8 * RTX 2080 Ti** [FOR FREE](https://github.com/THUDM/GLM-130B)!\n\n🌟 [2022-07-14] [Parameter-Efficient Prompt Tuning Makes Generalized and Calibrated Neural Text Retrievers](https://arxiv.org/pdf/2207.07087.pdf) is out! 
Check our [code](https://github.com/THUDM/P-tuning-v2/tree/main/PT-Retrieval).

🌟 [2021-10-15] [P-tuning v2](https://arxiv.org/abs/2110.07602) is out! Check our [GitHub repo](https://github.com/THUDM/P-tuning-v2).

A novel method to tune language models. Code and datasets for the paper ["GPT Understands, Too"](https://arxiv.org/abs/2103.10385).

[Xiao Liu*](https://scholar.google.com.hk/citations?user=VKI8EhUAAAAJ&hl=zh-CN), [Yanan Zheng*](https://zheng-yanan.github.io), [Zhengxiao Du](https://scholar.google.com/citations?user=A8x07E0AAAAJ&hl=en), [Ming Ding](https://scholar.google.com/citations?user=Va50YzkAAAAJ&hl=en), [Yujie Qian](https://scholar.google.com/citations?user=93a-9kkAAAAJ&hl=en), [Zhilin Yang](https://scholar.google.com.hk/citations?user=7qXxyJkAAAAJ&hl=en), [Jie Tang](http://keg.cs.tsinghua.edu.cn/jietang/)

![](img/PT.png)

You may also be interested in our other work, GLM: [All NLP Tasks Are Generation Tasks: A General Pretraining Framework](https://github.com/THUDM/GLM).

## How to use our code

We have released the code and datasets for the LAMA and few-shot SuperGLUE (32-dev) experiments. Please check **README.md** and **requirement.txt** in the corresponding subdirectories for details.

The [LAMA](https://cloud.tsinghua.edu.cn/f/21b9dcf05cc44adfad25/?dl=1) and [FewGLUE_32dev](https://github.com/THUDM/P-tuning/tree/main/FewGLUE_32dev) datasets are available. Place the LAMA dataset in the ./data directory and the SuperGLUE dataset in the ./ (project root) directory.

## Citation

If you find our work useful, please cite the following paper:

```
@article{liu2021gpt,
  title={GPT Understands, Too},
  author={Liu, Xiao and Zheng, Yanan and Du, Zhengxiao and Ding, Ming and Qian, Yujie and Yang, Zhilin and Tang, Jie},
  journal={arXiv preprint arXiv:2103.10385},
  year={2021}
}
```
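## How it works (illustrative sketch)

For a concrete picture of the method in the figure above: P-tuning represents a few pseudo tokens in the template as trainable continuous embeddings, reparameterized through a small LSTM + MLP prompt encoder and spliced into the pre-trained model's input embeddings. Below is a minimal PyTorch sketch of that idea, not the official implementation; all module names, variable names, and sizes are illustrative assumptions, so see the subdirectories for the real training code.

```python
# Minimal sketch of continuous prompt tuning (illustrative; not this repo's code).
import torch
import torch.nn as nn


class PromptEncoder(nn.Module):
    """Reparameterizes trainable pseudo-token embeddings through a
    bidirectional LSTM and a two-layer MLP, following the paper's design."""

    def __init__(self, num_pseudo_tokens: int, hidden_size: int):
        super().__init__()
        # One trainable vector per pseudo token in the template.
        self.raw_embeds = nn.Parameter(torch.randn(num_pseudo_tokens, hidden_size))
        self.lstm = nn.LSTM(hidden_size, hidden_size // 2, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(hidden_size, hidden_size),
                                 nn.ReLU(),
                                 nn.Linear(hidden_size, hidden_size))

    def forward(self) -> torch.Tensor:
        # (1, P, H) through the LSTM, then the MLP; returns (P, H).
        out, _ = self.lstm(self.raw_embeds.unsqueeze(0))
        return self.mlp(out).squeeze(0)


def splice_prompt(prompt_embeds: torch.Tensor,
                  token_embeds: torch.Tensor) -> torch.Tensor:
    """Prepend continuous prompts (P, H) to token embeddings (B, T, H)."""
    prompt = prompt_embeds.unsqueeze(0).expand(token_embeds.size(0), -1, -1)
    return torch.cat([prompt, token_embeds], dim=1)


if __name__ == "__main__":
    hidden, n_prompt = 768, 8                  # hypothetical sizes
    encoder = PromptEncoder(n_prompt, hidden)
    token_embeds = torch.randn(2, 16, hidden)  # stand-in for PLM input embeddings
    inputs = splice_prompt(encoder(), token_embeds)
    print(inputs.shape)                        # torch.Size([2, 24, 768])
```

During training, gradients flow into the prompt encoder while the pre-trained model's weights can stay frozen (or, for some tasks, be tuned jointly), which is what makes the approach parameter-efficient.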