{"id":21773515,"url":"https://github.com/waltonfuture/Diff-eRank","last_synced_at":"2025-07-19T10:31:00.453Z","repository":{"id":220060202,"uuid":"749820768","full_name":"waltonfuture/Diff-eRank","owner":"waltonfuture","description":"Code for https://arxiv.org/abs/2401.17139 (NeurIPS 2024)","archived":false,"fork":false,"pushed_at":"2024-11-15T11:46:37.000Z","size":38,"stargazers_count":23,"open_issues_count":0,"forks_count":2,"subscribers_count":1,"default_branch":"master","last_synced_at":"2024-11-15T12:31:54.292Z","etag":null,"topics":["evaluation-metrics","llm","llm-inference","machine-learning","mllm","neurips-2024"],"latest_commit_sha":null,"homepage":"https://arxiv.org/abs/2401.17139","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/waltonfuture.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-01-29T13:16:52.000Z","updated_at":"2024-11-15T11:46:41.000Z","dependencies_parsed_at":"2024-11-04T09:38:14.510Z","dependency_job_id":null,"html_url":"https://github.com/waltonfuture/Diff-eRank","commit_stats":null,"previous_names":["waltonfuture/matrix-entropy","waltonfuture/diff-erank"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/waltonfuture%2FDiff-eRank","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/waltonfuture%2FDiff-eRank/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/waltonfuture%2FDiff-eRank/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/
hosts/GitHub/repositories/waltonfuture%2FDiff-eRank/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/waltonfuture","download_url":"https://codeload.github.com/waltonfuture/Diff-eRank/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":226584451,"owners_count":17655036,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["evaluation-metrics","llm","llm-inference","machine-learning","mllm","neurips-2024"],"created_at":"2024-11-26T17:01:31.685Z","updated_at":"2025-07-19T10:31:00.424Z","avatar_url":"https://github.com/waltonfuture.png","language":"Python","readme":"# Diff-eRank: A Novel Rank-Based Metric for Evaluating Large Language Models (NeurIPS 2024)\n[Lai Wei](https://waltonfuture.github.io/) *, Zhiquan Tan *, Chenghai Li, [Jindong Wang](https://jd92.wang/), [Weiran Huang](https://www.weiranhuang.com/) (*Equal Contribution).\n\n**Shanghai Jiao Tong University \u0026 Tsinghua University \u0026 William and Mary**\n\n\u003ca href='https://arxiv.org/abs/2401.17139'\u003e\u003cimg src='https://img.shields.io/badge/Paper-Arxiv-red'\u003e\u003c/a\u003e \u003ca href='https://zhuanlan.zhihu.com/p/4920383227'\u003e\u003cimg src='https://img.shields.io/badge/Project-Link-Green'\u003e\u003c/a\u003e\n\n\n## Introduction\nWe introduce a rank-based metric called Diff-eRank, which is rooted in information theory and geometry principles. 
Diff-eRank evaluates LLMs by examining their hidden representations to quantify how LLMs discard redundant information after training.\nSpecifically, we demonstrate its applicability in both single-modal (language) and multi-modal settings. For language models, our findings reveal that Diff-eRank increases as the model scales up and exhibits a consistent relationship with traditional metrics such as loss and accuracy.\nFor multi-modal models, we also propose a rank-based evaluation method for assessing alignment quality, and we find that modern multi-modal large language models exhibit good alignment performance.\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"https://notes.sjtu.edu.cn/uploads/upload_e801b753d216de544fb4442c16d7d6de.png\" alt=\"Image description\" width=\"40%\"\u003e\n\u003c/p\u003e\n\n## Calculation of Diff-eRank for LLMs\n\n### Setup\n```bash\npip install transformers torch datasets\n```\n\n### Calculation\n\n```python\nfrom transformers import AutoTokenizer, AutoModel, AutoConfig\nimport torch\nimport math\n\n# R: representation matrix of shape N x d (N tokens, hidden size d)\ndef normalize(R):\n    with torch.no_grad():\n        mean = R.mean(dim=0)\n        R = R - mean  # center the rows\n        norms = torch.norm(R, p=2, dim=1, keepdim=True)\n        R = R/norms  # scale each row to unit norm\n    return R\n\ndef cal_cov(R):\n    with torch.no_grad():\n        Z = torch.nn.functional.normalize(R, dim=1)\n        A = torch.matmul(Z.T, Z)/Z.shape[0]  # covariance of the normalized rows\n    return A\n\ndef cal_erank(A):\n    with torch.no_grad():\n        eig_val = torch.svd(A / torch.trace(A))[1]  # eigenvalues of the trace-normalized covariance\n        entropy = - (eig_val * torch.log(eig_val)).nansum().item()\n        erank = math.exp(entropy)  # effective rank = exp(spectral entropy)\n    return erank\n\ndef compute(R):\n    return cal_erank(cal_cov(normalize(R)))\n\nmodel_path = \"facebook/opt-1.3b\" # for example\ntokenizer = AutoTokenizer.from_pretrained(model_path)\nmodel = AutoModel.from_pretrained(model_path).cuda()\nconfig = AutoConfig.from_pretrained(model_path)\nuntrained_model = AutoModel.from_config(config).to('cuda')\nuntrained_model.eval()  # from_config leaves the model in train mode; disable dropout\n\ntext = \"We 
introduce a rank-based metric called Diff-eRank, which is rooted in information theory and geometry principles. Diff-eRank evaluates LLMs by examining their hidden representations to quantify how LLMs discard redundant information after training.\" # for example\ninputs = tokenizer(text, return_tensors=\"pt\").to('cuda')\nwith torch.no_grad():\n    R1 = model(inputs.input_ids)[0][0, :, :]  # last hidden states of the trained model\n    R2 = untrained_model(inputs.input_ids)[0][0, :, :]  # last hidden states of the untrained model\n    erank1 = compute(R1)\n    erank2 = compute(R2)\n    RD = erank2 - erank1  # Diff-eRank: eRank(untrained) - eRank(trained)\nprint(RD)\n```\n\n### Diff-eRank of Single Sentence\n\n```bash\ncd utils\n\npython diff_erank_single_sentence.py\n```\n\n### Diff-eRank of Dataset\n\nPlease download the [wiki-en](https://huggingface.co/datasets/wikipedia), [dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k), [openwebtext2](https://huggingface.co/datasets/suolyer/pile_openwebtext2), and [hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets from Hugging Face and edit the data paths in your scripts.\n\n```bash\ncd utils\n\npython diff_erank_dataset.py\n```\n\n## Calculate eRank for MLLMs\n\nWe provide an example script to calculate eRank for Qwen2.5-VL. 
Please check it in `utils/erank-qwen2_5_vl.py`.\n\n## Citation\n\nIf you use Diff-eRank in your research or applications, please cite it using this BibTeX:\n```bibtex\n@inproceedings{weidiff,\n  title={Diff-eRank: A Novel Rank-Based Metric for Evaluating Large Language Models},\n  author={Wei, Lai and Tan, Zhiquan and Li, Chenghai and Wang, Jindong and Huang, Weiran},\n  booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},\n  year={2024}\n}\n```\n","funding_links":[],"categories":["A01_文本生成_文本对话"],"sub_categories":["大语言对话模型及数据"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fwaltonfuture%2FDiff-eRank","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fwaltonfuture%2FDiff-eRank","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fwaltonfuture%2FDiff-eRank/lists"}