{"id":14964632,"url":"https://github.com/seanlee97/angle","last_synced_at":"2025-05-14T22:08:39.446Z","repository":{"id":200909916,"uuid":"706184908","full_name":"SeanLee97/AnglE","owner":"SeanLee97","description":"Train and Infer Powerful Sentence Embeddings with AnglE | 🔥 SOTA on STS and MTEB Leaderboard","archived":false,"fork":false,"pushed_at":"2025-03-16T09:26:29.000Z","size":910,"stargazers_count":536,"open_issues_count":15,"forks_count":38,"subscribers_count":9,"default_branch":"main","last_synced_at":"2025-05-14T22:08:31.727Z","etag":null,"topics":["dense-retrieval","embeddings","information-retrieval","llama","llama2","llm","mteb","rag","retrieval-augmented-generation","semantic-similarity","semantic-textual-similarity","sentence-embedding","sentence-embeddings","sentence-vector","sts","stsbenchmark","text-embedding","text-similarity","text-vector","text2vec"],"latest_commit_sha":null,"homepage":"https://arxiv.org/abs/2309.12871","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/SeanLee97.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-10-17T13:15:23.000Z","updated_at":"2025-05-03T11:34:12.000Z","dependencies_parsed_at":"2023-10-27T13:34:44.620Z","dependency_job_id":"07b065dd-d8cd-439a-965c-c12df612cbcc","html_url":"https://github.com/SeanLee97/AnglE","commit_stats":{"total_commits":265,"total_committers":9,"mean_commits":"29.444444444444443","dds":0.0641509433962264,"last_synced_commit":"42659a716c3ec8faebf41c0b4cf898059fea1c04"},"previous_names":["seanlee97/angle"],"tags_count":42,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SeanLee97%2FAnglE","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SeanLee97%2FAnglE/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SeanLee97%2FAnglE/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SeanLee97%2FAnglE/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/SeanLee97","download_url":"https://codeload.github.com/SeanLee97/AnglE/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254235700,"owners_count":22036964,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["dense-retrieval","embeddings","information-retrieval","llama","llama2","llm","mteb","rag","retrieval-augmented-generation","semantic-similarity","semantic-textual-similarity","sentence-embedding","sentence-embeddings","sentence-vector","sts","stsbenchmark","text-embedding","text-similarity","text-vector","text2vec"],"created_at":"2024-09-24T13:33:32.811Z","updated_at":"2025-05-14T22:08:34.432Z",
"avatar_url":"https://github.com/SeanLee97.png","language":"Python","readme":"\u003csmall\u003eEN | [简体中文](README_zh.md) \u003c/small\u003e\n\n# AnglE 📐\n\u003e \u003csmall\u003eSponsored by \u003ca href=\"https://www.mixedbread.ai/\"\u003eMixedbread\u003c/a\u003e\u003c/small\u003e\n\n**For more detailed usage, please read the 📘 document:** https://angle.readthedocs.io/en/latest/index.html\n\n\u003ca href=\"https://arxiv.org/abs/2309.12871\"\u003e\n    \u003cimg src=\"https://img.shields.io/badge/Arxiv-2309.12871-yellow.svg?style=flat-square\" alt=\"https://arxiv.org/abs/2309.12871\" /\u003e\n\u003c/a\u003e\n\u003ca href=\"https://pypi.org/project/angle_emb/\"\u003e\n    \u003cimg src=\"https://img.shields.io/pypi/v/angle_emb?style=flat-square\" alt=\"PyPI version\" /\u003e\n\u003c/a\u003e\n\u003ca href=\"https://pypi.org/project/angle_emb/\"\u003e\n    \u003cimg src=\"https://img.shields.io/pypi/dm/angle_emb?style=flat-square\" alt=\"PyPI Downloads\" /\u003e\n\u003c/a\u003e\n\u003ca href=\"https://angle.readthedocs.io/en/latest/index.html\"\u003e\n    \u003cimg src=\"https://readthedocs.org/projects/angle/badge/?version=latest\u0026style=flat-square\" alt=\"Read the docs\" /\u003e\n\u003c/a\u003e\n\n\n[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/angle-optimized-text-embeddings/semantic-textual-similarity-on-sick-r-1)](https://paperswithcode.com/sota/semantic-textual-similarity-on-sick-r-1?p=angle-optimized-text-embeddings)\n[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/angle-optimized-text-embeddings/semantic-textual-similarity-on-sts16)](https://paperswithcode.com/sota/semantic-textual-similarity-on-sts16?p=angle-optimized-text-embeddings)\n[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/angle-optimized-text-embeddings/semantic-textual-similarity-on-sts15)](https://paperswithcode.com/sota/semantic-textual-similarity-on-sts15?p=angle-optimized-text-embeddings)\n[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/angle-optimized-text-embeddings/semantic-textual-similarity-on-sts14)](https://paperswithcode.com/sota/semantic-textual-similarity-on-sts14?p=angle-optimized-text-embeddings)\n[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/angle-optimized-text-embeddings/semantic-textual-similarity-on-sts13)](https://paperswithcode.com/sota/semantic-textual-similarity-on-sts13?p=angle-optimized-text-embeddings)\n[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/angle-optimized-text-embeddings/semantic-textual-similarity-on-sts12)](https://paperswithcode.com/sota/semantic-textual-similarity-on-sts12?p=angle-optimized-text-embeddings)\n[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/angle-optimized-text-embeddings/semantic-textual-similarity-on-sts-benchmark)](https://paperswithcode.com/sota/semantic-textual-similarity-on-sts-benchmark?p=angle-optimized-text-embeddings)\n\n📢 **Train/Infer Powerful Sentence Embeddings with AnglE.**\nThis library is from the paper: [AnglE: Angle-optimized Text Embeddings](https://arxiv.org/abs/2309.12871). It allows for training state-of-the-art BERT/LLM-based sentence embeddings with just a few lines of code. 

## ✨ Features

**Loss**:
- 📐 AnglE loss (ACL 2024)
- ⚖ Contrastive loss
- 📏 CoSENT loss
- ☕️ Espresso loss (ICLR 2025, a.k.a. 2DMSE; details: [README_ESE](README_ESE.md))

**Backbones**:
- BERT-based models (BERT, RoBERTa, ELECTRA, ALBERT, etc.)
- LLM-based models (LLaMA, Mistral, Qwen, etc.)
- Bi-directional LLM-based models (LLaMA, Mistral, Qwen, OpenELMo, etc.; refer to: https://github.com/WhereIsAI/BiLLM)

**Training**:
- Single-GPU training
- Multi-GPU training

> <a href="http://makeapullrequest.com"><img src="https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=flat-square" alt="http://makeapullrequest.com" /></a>
    More features will be added in the future.

## 🏆 Achievements

📅 May 16, 2024 | Paper "[AnglE: Angle-optimized Text Embeddings](https://arxiv.org/abs/2309.12871)" was accepted to the ACL 2024 Main Conference.

📅 Mar 13, 2024 | Paper "[BeLLM: Backward Dependency Enhanced Large Language Model for Sentence Embeddings](https://arxiv.org/abs/2311.05296)" was accepted to the NAACL 2024 Main Conference.

📅 Mar 8, 2024 | 🍞 [mixedbread's embedding](https://www.mixedbread.ai/blog/mxbai-embed-large-v1) ([mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1)) achieved SOTA on the [MTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard) with an average score of **64.68**! The model was trained using AnglE. Congrats mixedbread!

📅 Dec 4, 2023 | Our universal English sentence embedding model [WhereIsAI/UAE-Large-V1](https://huggingface.co/WhereIsAI/UAE-Large-V1) achieved SOTA on the [MTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard) with an average score of **64.64**! The model was trained using AnglE.

📅 Dec 2023 | AnglE achieved SOTA performance on the STS Benchmark for semantic textual similarity!

## 🤗 Official Pretrained Models

BERT-based models:

| 🤗 HF | Max Tokens | Pooling Strategy | Scenario |
|----|------|------|------|
| [WhereIsAI/UAE-Large-V1](https://huggingface.co/WhereIsAI/UAE-Large-V1) | 512 | cls | English, general-purpose |
| [WhereIsAI/UAE-Code-Large-V1](https://huggingface.co/WhereIsAI/UAE-Code-Large-V1) | 512 | cls | Code similarity |
| [WhereIsAI/pubmed-angle-base-en](https://huggingface.co/WhereIsAI/pubmed-angle-base-en) | 512 | cls | Medical similarity |
| [WhereIsAI/pubmed-angle-large-en](https://huggingface.co/WhereIsAI/pubmed-angle-large-en) | 512 | cls | Medical similarity |

LLM-based models:

| 🤗 HF (LoRA weight) | Backbone | Max Tokens | Prompts | Pooling Strategy | Scenario |
|----|------|------|------|------|------|
| [SeanLee97/angle-llama-13b-nli](https://huggingface.co/SeanLee97/angle-llama-13b-nli) | NousResearch/Llama-2-13b-hf | 4096 | `Prompts.A` | last token | English, similarity measurement |
| [SeanLee97/angle-llama-7b-nli-v2](https://huggingface.co/SeanLee97/angle-llama-7b-nli-v2) | NousResearch/Llama-2-7b-hf | 4096 | `Prompts.A` | last token | English, similarity measurement |

**💡 You can find more third-party embeddings trained with AnglE in this [HuggingFace Collection](https://huggingface.co/collections/SeanLee97/angle-based-embeddings-669a181354729d168a6ead9b).**

## 🚀 Quick Start

### ⬇️ Installation

```bash
python -m pip install -U angle-emb
```

### ⌛ Infer BERT-based Models
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1QJcA2Mvive4pBxWweTpZz9OgwvE42eJZ?usp=sharing)

1) **With Prompts**: You can specify a prompt via `prompt=YOUR_PROMPT` in the `encode` method. If a prompt is set, the inputs should be a list of dicts (or a single dict) with the key `text`, where `text` is the placeholder in the prompt for the input text. You can use other placeholder names. We provide a set of predefined prompts in the `Prompts` class; you can list them via `Prompts.list_prompts()`.

```python
from angle_emb import AnglE, Prompts
from angle_emb.utils import cosine_similarity


angle = AnglE.from_pretrained('WhereIsAI/UAE-Large-V1', pooling_strategy='cls').cuda()
# For retrieval tasks with UAE-Large-V1, use `Prompts.C` as the query prompt
# (no prompt is needed for documents).
# When a prompt is specified, the inputs should be dicts with the key 'text'.
qv = angle.encode({'text': 'what is the weather?'}, to_numpy=True, prompt=Prompts.C)
doc_vecs = angle.encode([
    'The weather is great!',
    'it is rainy today.',
    'i am going to bed'
], to_numpy=True)

for dv in doc_vecs:
    print(cosine_similarity(qv[0], dv))
```

2) **Without Prompts**: no prompt is needed; just input a list of strings or a single string.

```python
from angle_emb import AnglE
from angle_emb.utils import cosine_similarity


angle = AnglE.from_pretrained('WhereIsAI/UAE-Large-V1', pooling_strategy='cls').cuda()
# For non-retrieval tasks, no prompt is needed with UAE-Large-V1.
doc_vecs = angle.encode([
    'The weather is great!',
    'The weather is very good!',
    'i am going to bed'
])

for i, dv1 in enumerate(doc_vecs):
    for dv2 in doc_vecs[i+1:]:
        print(cosine_similarity(dv1, dv2))
```
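Both snippets score vectors with `cosine_similarity` from `angle_emb.utils`. For reference, here is a minimal sketch of what a cosine similarity computes, assuming 1-D NumPy vectors (the bundled helper may differ in implementation details):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Values near 1 indicate semantically similar sentences; values near 0 indicate unrelated ones.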

### ⌛ Infer LLM-based Models
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1QJcA2Mvive4pBxWweTpZz9OgwvE42eJZ?usp=sharing)

If the pretrained weight is a LoRA-based model, specify the backbone via `model_name_or_path` and the LoRA path via `pretrained_lora_path` in the `from_pretrained` method.

```python
import torch
from angle_emb import AnglE, Prompts
from angle_emb.utils import cosine_similarity

angle = AnglE.from_pretrained('NousResearch/Llama-2-7b-hf',
                              pretrained_lora_path='SeanLee97/angle-llama-7b-nli-v2',
                              pooling_strategy='last',
                              is_llm=True,
                              torch_dtype=torch.float16).cuda()
print('All predefined prompts:', Prompts.list_prompts())
doc_vecs = angle.encode([
    {'text': 'The weather is great!'},
    {'text': 'The weather is very good!'},
    {'text': 'i am going to bed'}
], prompt=Prompts.A)

for i, dv1 in enumerate(doc_vecs):
    for dv2 in doc_vecs[i+1:]:
        print(cosine_similarity(dv1, dv2))
```

### ⌛ Infer BiLLM-based Models
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1QJcA2Mvive4pBxWweTpZz9OgwvE42eJZ?usp=sharing)

Specify `apply_billm` and `billm_model_class` to load and infer BiLLM models.

```python
import os
# set an environment variable for the billm start index
os.environ['BiLLM_START_INDEX'] = '31'

import torch
from angle_emb import AnglE
from angle_emb.utils import cosine_similarity

# specify `apply_billm` and `billm_model_class` to load billm models
angle = AnglE.from_pretrained('NousResearch/Llama-2-7b-hf',
                              pretrained_lora_path='SeanLee97/bellm-llama-7b-nli',
                              pooling_strategy='last',
                              is_llm=True,
                              apply_billm=True,
                              billm_model_class='LlamaForCausalLM',
                              torch_dtype=torch.float16).cuda()

doc_vecs = angle.encode([
    {'text': 'The weather is great!'},
    {'text': 'The weather is very good!'},
    {'text': 'i am going to bed'}
], prompt='The representative word for sentence {text} is:"')

for i, dv1 in enumerate(doc_vecs):
    for dv2 in doc_vecs[i+1:]:
        print(cosine_similarity(dv1, dv2))
```

### ⌛ Infer Espresso/Matryoshka Models
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1QJcA2Mvive4pBxWweTpZz9OgwvE42eJZ?usp=sharing)

Specify `layer_index` and `embedding_size` to truncate layers and embeddings.

```python
from angle_emb import AnglE
from angle_emb.utils import cosine_similarity


angle = AnglE.from_pretrained('mixedbread-ai/mxbai-embed-2d-large-v1', pooling_strategy='cls').cuda()
# truncate the model to its first 22 layers
angle = angle.truncate_layer(layer_index=22)
# specify an embedding size to truncate the embeddings
doc_vecs = angle.encode([
    'The weather is great!',
    'The weather is very good!',
    'i am going to bed'
], embedding_size=768)

for i, dv1 in enumerate(doc_vecs):
    for dv2 in doc_vecs[i+1:]:
        print(cosine_similarity(dv1, dv2))
```
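A quick way to sanity-check an Espresso/Matryoshka model is to compare similarities computed at different embedding sizes; by design they should roughly track each other. A minimal sketch (the truncation size of 512 is illustrative, not a recommendation):

```python
from angle_emb import AnglE
from angle_emb.utils import cosine_similarity

angle = AnglE.from_pretrained('mixedbread-ai/mxbai-embed-2d-large-v1', pooling_strategy='cls').cuda()

texts = ['The weather is great!', 'The weather is very good!']
full_vecs = angle.encode(texts)                       # full-size embeddings
small_vecs = angle.encode(texts, embedding_size=512)  # truncated embeddings

# Similarities from truncated embeddings should stay close to the full-size ones.
print(cosine_similarity(full_vecs[0], full_vecs[1]))
print(cosine_similarity(small_vecs[0], small_vecs[1]))
```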

### ⌛ Infer Third-party Models

You can load any transformer-based third-party model, such as `mixedbread-ai/mxbai-embed-large-v1`, `sentence-transformers/all-MiniLM-L6-v2`, or `BAAI/bge-large-en-v1.5`, using `angle_emb`.

Here is an example:

```python
from angle_emb import AnglE

model = AnglE.from_pretrained('mixedbread-ai/mxbai-embed-large-v1', pooling_strategy='cls').cuda()
vec = model.encode('hello world', to_numpy=True)
print(vec)
```

## Batch Inference

It is recommended to use Mixedbread's `batched` library to speed up inference.

```bash
python -m pip install batched
```

```python
import batched
from angle_emb import AnglE

model = AnglE.from_pretrained("WhereIsAI/UAE-Large-V1", pooling_strategy='cls').cuda()
model.encode = batched.dynamically(model.encode, batch_size=64)

vecs = model.encode([
    'The weather is great!',
    'The weather is very good!',
    'i am going to bed'
] * 50)
```

## 🕸️ Custom Train

💡 For more details, please refer to the documentation on [training and fine-tuning](https://angle.readthedocs.io/en/latest/notes/training.html).

### 🗂️ 1. Data Preparation

We currently support three dataset formats:

1) `DatasetFormats.A`: a pair format with three columns: `text1`, `text2`, and `label` (0/1).

2) `DatasetFormats.B`: a triple format with three columns: `text`, `positive`, and `negative`. `positive` and `negative` store the positive and negative samples of `text`.

3) `DatasetFormats.C`: a pair format with two columns: `text` and `positive`. `positive` stores the positive sample of `text`.

Prepare your data as a huggingface `datasets.Dataset` in whichever of these formats matches your supervised data.
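For instance, a small `DatasetFormats.B` dataset can be built directly from a list of dicts. A minimal sketch (the texts are placeholders):

```python
from datasets import Dataset

# DatasetFormats.B: each row carries `text`, `positive`, and `negative`.
train_ds = Dataset.from_list([
    {'text': 'The weather is great!',
     'positive': 'The weather is very good!',
     'negative': 'i am going to bed'},
    # ... more triples
])
```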

### 🚂 2. Train with CLI [Recommended]

Use `angle-trainer` to train your AnglE model from the command line.

1) Single-GPU training:

Usage:

```bash
CUDA_VISIBLE_DEVICES=0 angle-trainer --help
```

2) Multi-GPU training:

Usage:

```bash
CUDA_VISIBLE_DEVICES=0,1 torchrun --nproc_per_node=2 --master_port=1234 -m angle_emb.angle_trainer --help
```

### 🚂 3. Custom Train

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1h28jHvv_x-0fZ0tItIMjf8rJGp3GcO5V?usp=sharing)

```python
from datasets import load_dataset
from angle_emb import AnglE, AngleDataTokenizer


# 1. load the pretrained model
angle = AnglE.from_pretrained('SeanLee97/angle-bert-base-uncased-nli-en-v1', max_length=128, pooling_strategy='cls').cuda()

# 2. load the dataset
# `text1`, `text2`, and `label` are the three required columns.
ds = load_dataset('mteb/stsbenchmark-sts')
ds = ds.map(lambda obj: {"text1": str(obj["sentence1"]), "text2": str(obj['sentence2']), "label": obj['score']})
ds = ds.select_columns(["text1", "text2", "label"])

# 3. transform the data
train_ds = ds['train'].shuffle().map(AngleDataTokenizer(angle.tokenizer, angle.max_length), num_proc=8)
valid_ds = ds['validation'].map(AngleDataTokenizer(angle.tokenizer, angle.max_length), num_proc=8)

# 4. fit
angle.fit(
    train_ds=train_ds,
    valid_ds=valid_ds,
    output_dir='ckpts/sts-b',
    batch_size=32,
    epochs=5,
    learning_rate=2e-5,
    save_steps=100,
    eval_steps=1000,
    warmup_steps=0,
    gradient_accumulation_steps=1,
    loss_kwargs={
        'cosine_w': 1.0,
        'ibn_w': 1.0,
        'cln_w': 1.0,
        'angle_w': 0.02,
        'cosine_tau': 20,
        'ibn_tau': 20,
        'angle_tau': 20
    },
    fp16=True,
    logging_steps=100
)

# 5. evaluate
corrcoef = angle.evaluate(ds['test'])
print('Spearman\'s corrcoef:', corrcoef)
```

### 💡 Others

- To enable `llm` training, specify `--is_llm 1` and configure appropriate LoRA hyperparameters.
- To enable `billm` training, specify `--apply_billm 1` and configure an appropriate `billm_model_class` such as `LlamaForCausalLM` (refer to: https://github.com/WhereIsAI/BiLLM?tab=readme-ov-file#usage).
- To enable Espresso sentence embeddings (ESE), specify `--apply_ese 1` and configure the ESE hyperparameters via `--ese_kl_temperature float` and `--ese_compression_size integer`.
- To convert trained AnglE models to `sentence-transformers`, run `python scripts/convert_to_sentence_transformers.py --help` for more details.

## 💡 4. Fine-tuning Tips

For more details, please refer to the [documentation](https://angle.readthedocs.io/en/latest/notes/training.html#fine-tuning-tips).

1️⃣ If your dataset format is `DatasetFormats.A`, it is recommended to slightly increase the weight of `cosine_w` or slightly decrease the weight of `ibn_w`.

2️⃣ If your dataset format is `DatasetFormats.B`, it is recommended to set `cosine_w` to 0 and `angle_w` to a small value like 0.02. Be sure to set `cln_w` and `ibn_w` (see the sketch after this list).

3️⃣ If your dataset format is `DatasetFormats.C`, only `ibn_w` and `ibn_tau` are effective; you don't need to tune other parameters.

4️⃣ To alleviate information forgetting during fine-tuning, it is better to specify `teacher_name_or_path`. If `teacher_name_or_path` equals `model_name_or_path`, it performs self-distillation. **Note that** `teacher_name_or_path` must use the same tokenizer as `model_name_or_path`; otherwise it will lead to unexpected results.
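As a concrete illustration of tip 2️⃣, a `DatasetFormats.B` run might configure `loss_kwargs` like this. A minimal sketch reusing `angle`, `train_ds`, and `valid_ds` from the Custom Train example above; the exact weights are assumptions to tune for your data:

```python
# Hypothetical loss weighting for DatasetFormats.B (tune for your data).
angle.fit(
    train_ds=train_ds,
    valid_ds=valid_ds,
    output_dir='ckpts/format-b',
    batch_size=32,
    epochs=5,
    learning_rate=2e-5,
    loss_kwargs={
        'cosine_w': 0.0,   # tip 2️⃣: disable the cosine loss
        'angle_w': 0.02,   # keep the angle loss weight small
        'ibn_w': 1.0,      # be sure to set ibn_w
        'cln_w': 1.0,      # be sure to set cln_w
    },
    fp16=True,
)
```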

## 5. Fine-tuning and Inferring AnglE with `sentence-transformers`

- **Training:** SentenceTransformers also provides an implementation of the [AnglE loss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#angleloss). **However, it is only partially implemented and may not work as well as the official code. We recommend using the official `angle_emb` for fine-tuning AnglE models.**

- **Inferring:** If your model is trained with `angle_emb` and you want to use it with `sentence-transformers`, you can convert it to a `sentence-transformers` model using the script `examples/convert_to_sentence_transformers.py`.

# 🫡 Citation

You are welcome to use our code and pre-trained models. If you do, please support us by citing our work as follows:

```bibtex
@article{li2023angle,
  title={AnglE-optimized Text Embeddings},
  author={Li, Xianming and Li, Jing},
  journal={arXiv preprint arXiv:2309.12871},
  year={2023}
}
```

# 📜 ChangeLogs

| 📅 | Description |
|----|------|
| 2024 May 21 | Support Espresso sentence embeddings |
| 2024 Feb 7 | Support training with only positive pairs (`DatasetFormats.C`) |
| 2023 Dec 4 | Release a universal English sentence embedding model: [WhereIsAI/UAE-Large-V1](https://huggingface.co/WhereIsAI/UAE-Large-V1) |
| 2023 Nov 2 | Release an English pretrained model: `SeanLee97/angle-llama-13b-nli` |
| 2023 Oct 28 | Release two Chinese pretrained models: `SeanLee97/angle-roberta-wwm-base-zhnli-v1` and `SeanLee97/angle-llama-7b-zhnli-v1`; add a Chinese README.md |

# 📧 Contact

If you have any questions or suggestions, please feel free to contact us via email: xmlee97@gmail.com

# © License

This project is licensed under the MIT License.
For the pretrained models, please refer to the corresponding licenses of the models.