{"id":19529833,"url":"https://github.com/internlm/internlm-techreport","last_synced_at":"2026-01-27T19:04:49.560Z","repository":{"id":172130039,"uuid":"648849579","full_name":"InternLM/InternLM-techreport","owner":"InternLM","description":null,"archived":false,"fork":false,"pushed_at":"2023-06-07T04:52:53.000Z","size":7651,"stargazers_count":904,"open_issues_count":6,"forks_count":25,"subscribers_count":23,"default_branch":"main","last_synced_at":"2025-05-09T21:34:58.299Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":null,"has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/InternLM.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null}},"created_at":"2023-06-03T01:46:45.000Z","updated_at":"2025-04-08T07:46:56.000Z","dependencies_parsed_at":"2024-01-18T04:18:37.466Z","dependency_job_id":"0dbe21a8-51be-4b72-b96d-babb8bbcbb5c","html_url":"https://github.com/InternLM/InternLM-techreport","commit_stats":null,"previous_names":["internlm/internlm-techreport"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/InternLM/InternLM-techreport","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/InternLM%2FInternLM-techreport","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/InternLM%2FInternLM-techreport/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/InternLM%2FInternLM-techreport/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/InternLM%2FInternLM-techreport/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/InternLM","d
ownload_url":"https://codeload.github.com/InternLM/InternLM-techreport/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/InternLM%2FInternLM-techreport/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":28819062,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-01-27T18:44:20.126Z","status":"ssl_error","status_checked_at":"2026-01-27T18:44:09.161Z","response_time":168,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.6:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-11-11T01:27:49.494Z","updated_at":"2026-01-27T19:04:49.510Z","avatar_url":"https://github.com/InternLM.png","language":null,"readme":"# InternLM\n\n[InternLM](https://internlm.org) is a multilingual large language model jointly developed by Shanghai AI Lab and SenseTime (with equal contribution), in collaboration with the Chinese University of Hong Kong, Fudan University, and Shanghai Jiaotong University.\n\n**Technical report**: [[PDF]](InternLM.pdf)\n\n**Note:** Please right-click the link above to download the PDF file directly.\n\n---\n\n## Abstract\n\nWe present InternLM, a multilingual foundational language model with 104B parameters. InternLM is pre-trained on a large corpus of 1.6T tokens using a multi-phase progressive process, and then fine-tuned to align with human preferences. 
We also developed a training system called Uniscale-LLM for efficient large language model training. Evaluation on a number of benchmarks shows that InternLM achieves state-of-the-art performance in multiple aspects, including knowledge understanding, reading comprehension, mathematics, and coding. With such well-rounded capabilities, InternLM achieves outstanding performance on comprehensive exams, including MMLU, AGIEval, C-Eval, and GAOKAO-Bench, without resorting to external tools. On these benchmarks, InternLM not only significantly outperforms open-source models, but also obtains superior performance compared to ChatGPT. InternLM also demonstrates an excellent understanding of the Chinese language and Chinese culture, which makes it a suitable foundation model for Chinese-oriented language applications. This manuscript gives a detailed study of our results, with benchmarks and examples across a diverse set of knowledge domains and tasks.\n\n## Main Results\n\nAs the latest large language models begin to exhibit human-level intelligence, \nexams designed for humans, such as China's college entrance examination and the US SAT and GRE, \nare considered important means to evaluate language models. \nNote that in its technical report on GPT-4, OpenAI tested GPT-4\nthrough exams across multiple areas and used the exam scores as key results. 
\n\nWe tested InternLM in comparison with others on four comprehensive exam benchmarks,\nas below:\n\n- **MMLU**: \nA multi-task benchmark constructed from various US exams, \nwhich covers elementary mathematics, physics, chemistry, computer science, American history, law, economics, diplomacy, etc.\n\n- **AGIEval**:\nA benchmark developed by Microsoft Research to evaluate the ability of language models through human-oriented exams, which comprises 19 task sets derived from various exams in China and the United States, e.g., the college entrance exams and lawyer qualification exams in China, and the SAT, LSAT, GRE, and GMAT in the United States. \nAmong the 19 task sets, 9 are based on the Chinese college entrance exam (Gaokao), which we single out as an important collection named **AGIEval (GK)**.\n\n- **C-Eval**:\nA comprehensive benchmark devised to evaluate Chinese language models, which\ncontains nearly 14,000 questions in 52 subjects, covering mathematics, physics, \nchemistry, biology, history, politics, computer science, and other disciplines, as well as \nprofessional exams for civil servants, certified accountants, lawyers, and doctors.\n\n- **GAOKAO-Bench**:\nA comprehensive benchmark based on the Chinese college entrance exams, which \nincludes all subjects of the college entrance exam. It provides different types \nof questions, including multiple choice, fill-in-the-blank, and QA.\nFor conciseness, we refer to this benchmark simply as **Gaokao**.\n\n![Exam benchmarks](https://internlm.oss-cn-shanghai.aliyuncs.com/exam.png)\n\n### Results on MMLU\n\n![MMLU](https://internlm.oss-cn-shanghai.aliyuncs.com/MMLU.png)\n\n### Results on AGIEval\n\n![AGIEval](https://internlm.oss-cn-shanghai.aliyuncs.com/AGIEval.png)\n\n### Results on C-Eval\n\nC-Eval has a [live leaderboard](https://cevalbenchmark.com/static/leaderboard.html). 
Below is a screenshot that shows all\nthe results (as of 2023-06-01).\n\n![C-Eval leaderboard](https://internlm.oss-cn-shanghai.aliyuncs.com/ceval-leaderboard.png)\n\n![C-Eval](https://internlm.oss-cn-shanghai.aliyuncs.com/C-Eval.png)\n\n### Results on GAOKAO-Bench\n\n![GAOKAO-Bench](https://internlm.oss-cn-shanghai.aliyuncs.com/gaokao.png)\n\n## Benchmarks in Specific Aspects\n\nWe also tested InternLM in comparison with others in multiple aspects:\n\n-  Knowledge QA: TriviaQA and NaturalQuestions\n-  Reading Comprehension: RACE\n-  Chinese Understanding: CLUE and FewCLUE\n-  Mathematics: GSM8K and MATH\n-  Coding: HumanEval and MBPP\n\nPlease refer to our [technical report](InternLM.pdf) for detailed results.\n\nWe are working on more tests, and will share new results as our work proceeds.\n\n## Citation\n\nYou can cite this technical report as follows:\n\n```BibTeX\n@misc{2023internlm,\n    title={InternLM: A Multilingual Language Model with Progressively Enhanced Capabilities},\n    author={InternLM Team},\n    howpublished = {\\url{https://github.com/InternLM/InternLM-techreport}},\n    year={2023}\n}\n```\n\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Finternlm%2Finternlm-techreport","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Finternlm%2Finternlm-techreport","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Finternlm%2Finternlm-techreport/lists"}