{"id":13935760,"url":"https://github.com/natasha/razdel","last_synced_at":"2025-04-04T10:03:37.554Z","repository":{"id":41183317,"uuid":"156970335","full_name":"natasha/razdel","owner":"natasha","description":"Rule-based token, sentence segmentation for Russian language","archived":false,"fork":false,"pushed_at":"2023-07-24T09:33:46.000Z","size":39000,"stargazers_count":261,"open_issues_count":5,"forks_count":32,"subscribers_count":14,"default_branch":"master","last_synced_at":"2025-03-28T09:02:23.475Z","etag":null,"topics":["nlp","python","russian","sentence-boundary-detection","sentence-segmentation","tokenization"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/natasha.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2018-11-10T10:23:50.000Z","updated_at":"2025-03-20T17:37:39.000Z","dependencies_parsed_at":"2024-04-27T23:47:06.534Z","dependency_job_id":null,"html_url":"https://github.com/natasha/razdel","commit_stats":{"total_commits":90,"total_committers":4,"mean_commits":22.5,"dds":0.0444444444444444,"last_synced_commit":"25d7433e1289051761835cca8d328fc7bca3e837"},"previous_names":[],"tags_count":1,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/natasha%2Frazdel","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/natasha%2Frazdel/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/natasha%2Frazdel/releases","manifests_url":"https://repos.ecosyste.ms/
api/v1/hosts/GitHub/repositories/natasha%2Frazdel/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/natasha","download_url":"https://codeload.github.com/natasha/razdel/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247149787,"owners_count":20892014,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["nlp","python","russian","sentence-boundary-detection","sentence-segmentation","tokenization"],"created_at":"2024-08-07T23:02:04.436Z","updated_at":"2025-04-04T10:03:37.528Z","avatar_url":"https://github.com/natasha.png","language":"Python","readme":"\u003cimg src=\"https://github.com/natasha/natasha-logos/blob/master/razdel.svg\"\u003e\n\n![CI](https://github.com/natasha/razdel/actions/workflows/test.yml/badge.svg)\n\n`razdel` — rule-based system for Russian sentence and word tokenization.\n\n## Usage\n\n```python\n\u003e\u003e\u003e from razdel import tokenize\n\n\u003e\u003e\u003e tokens = list(tokenize('Кружка-термос на 0.5л (50/64 см³, 516;...)'))\n\u003e\u003e\u003e tokens\n[Substring(0, 13, 'Кружка-термос'),\n Substring(14, 16, 'на'),\n Substring(17, 20, '0.5'),\n Substring(20, 21, 'л'),\n Substring(22, 23, '(')\n ...]\n \n\u003e\u003e\u003e [_.text for _ in tokens]\n['Кружка-термос', 'на', '0.5', 'л', '(', '50/64', 'см³', ',', '516', ';', '...', ')']\n```\n\n```python\n\u003e\u003e\u003e from razdel import sentenize\n\n\u003e\u003e\u003e text = '''\n... - \"Так в чем же дело?\" - \"Не ра-ду-ют\".\n... И т. д. и т. п. В общем, вся газета\n... 
'''\n\n\u003e\u003e\u003e list(sentenize(text))\n[Substring(1, 23, '- \"Так в чем же дело?\"'),\n Substring(24, 40, '- \"Не ра-ду-ют\".'),\n Substring(41, 56, 'И т. д. и т. п.'),\n Substring(57, 76, 'В общем, вся газета')]\n```\n\n## Installation\n\n`razdel` supports Python 3.7+ and PyPy 3.\n\n```bash\n$ pip install razdel\n```\n\n## Documentation\n\nMaterials are in Russian:\n\n* \u003ca href=\"https://natasha.github.io/razdel\"\u003eRazdel page on natasha.github.io\u003c/a\u003e \n* \u003ca href=\"https://youtu.be/-7XT_U6hVvk?t=1345\"\u003eRazdel section of Datafest 2020 talk\u003c/a\u003e\n\n## Evaluation\n\nUnfortunately, there is no single correct way to split text into sentences and tokens. For example, one may split `«Как же так?! Захар...» — воскликнут Пронин.` into three sentences `[\"«Как же так?!\",  \"Захар...»\", \"— воскликнут Пронин.\"]` while `razdel` splits it into two `[\"«Как же так?!\", \"Захар...» — воскликнут Пронин.\"]`. What would be the correct way to tokenize `т.е.`? One may split it into `т.|е.`, while `razdel` splits it into `т|.|е|.`.\n\n`razdel` tries to mimic the segmentation of these 4 datasets: \u003ca href=\"https://github.com/natasha/corus#load_ud_syntag\"\u003eSynTagRus\u003c/a\u003e, \u003ca href=\"https://github.com/natasha/corus#load_morphoru_corpora\"\u003eOpenCorpora\u003c/a\u003e, \u003ca href=\"https://github.com/natasha/corus#load_morphoru_gicrya\"\u003eGICRYA\u003c/a\u003e and \u003ca href=\"https://github.com/natasha/corus#load_morphoru_rnc\"\u003eRNC\u003c/a\u003e. These datasets mainly consist of news and fiction, and `razdel` rules are optimized for these kinds of texts. The library may perform worse on other domains such as social media, scientific articles and legal documents.\n\nWe measure the absolute number of errors. There are a lot of trivial cases in the tokenization task. For example, the text `чуть-чуть?!` is non-trivial: one may split it into `чуть|-|чуть|?|!` while the correct tokenization is `чуть-чуть|?!`, but such examples are rare. 
The vast majority of cases are trivial; for example, the text `в 5 часов ...` is correctly tokenized even by Python's native `str.split` into `в| |5| |часов| |...`. Due to the large number of trivial cases, the overall quality of all segmenters is high, and it is hard to differentiate between, for example, 99.33%, 99.95% and 99.88%, so we report the absolute number of errors.\n\n`errors` — number of errors per 1000 tokens/sentences. For example, suppose the etalon segmentation is `что-то|?` and the prediction is `что|-|то?`; then the number of errors is 3: 1 for the missing split `то?` + 2 for the extra splits `что|-|то`.\n\n`time` — seconds taken to process the whole dataset.\n\n`spacy_tokenize`, `aatimofeev` and others are defined in \u003ca href=\"https://github.com/natasha/naeval/blob/master/naeval/segment/models.py\"\u003enaeval/segment/models.py\u003c/a\u003e; for links to models see the \u003ca href=\"https://github.com/natasha/naeval#models\"\u003eNaeval registry\u003c/a\u003e. Tables are computed in \u003ca href=\"https://github.com/natasha/naeval/blob/master/scripts/01_segment/main.ipynb\"\u003enaeval/segment/main.ipynb\u003c/a\u003e.\n\n### Tokens\n\n\u003c!--- token ---\u003e\n\u003ctable border=\"0\" class=\"dataframe\"\u003e\n  \u003cthead\u003e\n    \u003ctr\u003e\n      \u003cth\u003e\u003c/th\u003e\n      \u003cth colspan=\"2\" halign=\"left\"\u003ecorpora\u003c/th\u003e\n      \u003cth colspan=\"2\" halign=\"left\"\u003esyntag\u003c/th\u003e\n      \u003cth colspan=\"2\" halign=\"left\"\u003egicrya\u003c/th\u003e\n      \u003cth colspan=\"2\" halign=\"left\"\u003ernc\u003c/th\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003cth\u003e\u003c/th\u003e\n      \u003cth\u003eerrors\u003c/th\u003e\n      \u003cth\u003etime\u003c/th\u003e\n      \u003cth\u003eerrors\u003c/th\u003e\n      \u003cth\u003etime\u003c/th\u003e\n      \u003cth\u003eerrors\u003c/th\u003e\n      \u003cth\u003etime\u003c/th\u003e\n      \u003cth\u003eerrors\u003c/th\u003e\n      
\u003cth\u003etime\u003c/th\u003e\n    \u003c/tr\u003e\n  \u003c/thead\u003e\n  \u003ctbody\u003e\n    \u003ctr\u003e\n      \u003cth\u003ere.findall(\\w+|\\d+|\\p+)\u003c/th\u003e\n      \u003ctd\u003e24\u003c/td\u003e\n      \u003ctd\u003e0.5\u003c/td\u003e\n      \u003ctd\u003e16\u003c/td\u003e\n      \u003ctd\u003e0.5\u003c/td\u003e\n      \u003ctd\u003e19\u003c/td\u003e\n      \u003ctd\u003e0.4\u003c/td\u003e\n      \u003ctd\u003e60\u003c/td\u003e\n      \u003ctd\u003e0.4\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003cth\u003espacy\u003c/th\u003e\n      \u003ctd\u003e26\u003c/td\u003e\n      \u003ctd\u003e6.2\u003c/td\u003e\n      \u003ctd\u003e13\u003c/td\u003e\n      \u003ctd\u003e5.8\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e14\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e4.1\u003c/td\u003e\n      \u003ctd\u003e32\u003c/td\u003e\n      \u003ctd\u003e3.9\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003cth\u003enltk.word_tokenize\u003c/th\u003e\n      \u003ctd\u003e60\u003c/td\u003e\n      \u003ctd\u003e3.4\u003c/td\u003e\n      \u003ctd\u003e256\u003c/td\u003e\n      \u003ctd\u003e3.3\u003c/td\u003e\n      \u003ctd\u003e75\u003c/td\u003e\n      \u003ctd\u003e2.7\u003c/td\u003e\n      \u003ctd\u003e199\u003c/td\u003e\n      \u003ctd\u003e2.9\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003cth\u003emystem\u003c/th\u003e\n      \u003ctd\u003e23\u003c/td\u003e\n      \u003ctd\u003e5.0\u003c/td\u003e\n      \u003ctd\u003e15\u003c/td\u003e\n      \u003ctd\u003e4.7\u003c/td\u003e\n      \u003ctd\u003e19\u003c/td\u003e\n      \u003ctd\u003e3.7\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e14\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e3.9\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003cth\u003emosestokenizer\u003c/th\u003e\n      \u003ctd\u003e\u003cb\u003e11\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e2.1\u003c/b\u003e\u003c/td\u003e\n      
\u003ctd\u003e\u003cb\u003e8\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e1.9\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e15\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e1.6\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e16\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e1.7\u003c/b\u003e\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003cth\u003esegtok.word_tokenize\u003c/th\u003e\n      \u003ctd\u003e16\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e2.3\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e8\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e2.3\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e14\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e1.8\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e9\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e1.8\u003c/b\u003e\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003cth\u003eaatimofeev/spacy_russian_tokenizer\u003c/th\u003e\n      \u003ctd\u003e17\u003c/td\u003e\n      \u003ctd\u003e48.7\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e4\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e51.1\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e5\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e39.5\u003c/td\u003e\n      \u003ctd\u003e20\u003c/td\u003e\n      \u003ctd\u003e52.2\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003cth\u003ekoziev/rutokenizer\u003c/th\u003e\n      \u003ctd\u003e\u003cb\u003e15\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e1.1\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e8\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e1.0\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e23\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e0.8\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e68\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e0.9\u003c/b\u003e\u003c/td\u003e\n    \u003c/tr\u003e\n    
\u003ctr\u003e\n      \u003cth\u003erazdel.tokenize\u003c/th\u003e\n      \u003ctd\u003e\u003cb\u003e9\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e2.9\u003c/td\u003e\n      \u003ctd\u003e9\u003c/td\u003e\n      \u003ctd\u003e2.8\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e3\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e2.0\u003c/td\u003e\n      \u003ctd\u003e16\u003c/td\u003e\n      \u003ctd\u003e2.2\u003c/td\u003e\n    \u003c/tr\u003e\n  \u003c/tbody\u003e\n\u003c/table\u003e\n\u003c!--- token ---\u003e\n\n### Sentences\n\n\u003c!--- sent ---\u003e\n\u003ctable border=\"0\" class=\"dataframe\"\u003e\n  \u003cthead\u003e\n    \u003ctr\u003e\n      \u003cth\u003e\u003c/th\u003e\n      \u003cth colspan=\"2\" halign=\"left\"\u003ecorpora\u003c/th\u003e\n      \u003cth colspan=\"2\" halign=\"left\"\u003esyntag\u003c/th\u003e\n      \u003cth colspan=\"2\" halign=\"left\"\u003egicrya\u003c/th\u003e\n      \u003cth colspan=\"2\" halign=\"left\"\u003ernc\u003c/th\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003cth\u003e\u003c/th\u003e\n      \u003cth\u003eerrors\u003c/th\u003e\n      \u003cth\u003etime\u003c/th\u003e\n      \u003cth\u003eerrors\u003c/th\u003e\n      \u003cth\u003etime\u003c/th\u003e\n      \u003cth\u003eerrors\u003c/th\u003e\n      \u003cth\u003etime\u003c/th\u003e\n      \u003cth\u003eerrors\u003c/th\u003e\n      \u003cth\u003etime\u003c/th\u003e\n    \u003c/tr\u003e\n  \u003c/thead\u003e\n  \u003ctbody\u003e\n    \u003ctr\u003e\n      \u003cth\u003ere.split([.?!…])\u003c/th\u003e\n      \u003ctd\u003e114\u003c/td\u003e\n      \u003ctd\u003e0.9\u003c/td\u003e\n      \u003ctd\u003e53\u003c/td\u003e\n      \u003ctd\u003e0.6\u003c/td\u003e\n      \u003ctd\u003e63\u003c/td\u003e\n      \u003ctd\u003e0.7\u003c/td\u003e\n      \u003ctd\u003e130\u003c/td\u003e\n      \u003ctd\u003e1.0\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003cth\u003esegtok.split_single\u003c/th\u003e\n      \u003ctd\u003e106\u003c/td\u003e\n  
    \u003ctd\u003e17.8\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e36\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e13.4\u003c/td\u003e\n      \u003ctd\u003e1001\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e1.1\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e912\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e2.8\u003c/b\u003e\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003cth\u003emosestokenizer\u003c/th\u003e\n      \u003ctd\u003e238\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e8.9\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e182\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e5.7\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e80\u003c/td\u003e\n      \u003ctd\u003e6.4\u003c/td\u003e\n      \u003ctd\u003e287\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e7.4\u003c/b\u003e\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003cth\u003enltk.sent_tokenize\u003c/th\u003e\n      \u003ctd\u003e\u003cb\u003e92\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e10.1\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e36\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e5.3\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e44\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e5.6\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e183\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e8.9\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003cth\u003edeeppavlov/rusenttokenize\u003c/th\u003e\n      \u003ctd\u003e\u003cb\u003e57\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e10.9\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e10\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e7.9\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e56\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e6.8\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e119\u003c/b\u003e\u003c/td\u003e\n      
\u003ctd\u003e\u003cb\u003e7.0\u003c/b\u003e\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003cth\u003erazdel.sentenize\u003c/th\u003e\n      \u003ctd\u003e\u003cb\u003e52\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e6.1\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e7\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e3.9\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e72\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e4.5\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e\u003cb\u003e59\u003c/b\u003e\u003c/td\u003e\n      \u003ctd\u003e7.5\u003c/td\u003e\n    \u003c/tr\u003e\n  \u003c/tbody\u003e\n\u003c/table\u003e\n\u003c!--- sent ---\u003e\n\n## Support\n\n- Chat — https://telegram.me/natural_language_processing\n- Issues — https://github.com/natasha/razdel/issues\n- Commercial support — https://lab.alexkuk.ru\n\n## Development\n\nDev env\n\n```bash\npython -m venv ~/.venvs/natasha-razdel\nsource ~/.venvs/natasha-razdel/bin/activate\n\npip install -r requirements/dev.txt\npip install -e .\n```\n\nTest\n\n```bash\nmake test\nmake int  # 2000 integration tests\n```\n\nRelease\n\n```bash\n# Update setup.py version\n\ngit commit -am 'Up version'\ngit tag v0.5.0\n\ngit push\ngit push --tags\n```\n\n`mystem` errors on `syntag`\n\n```bash\n# see naeval/data\ncat syntag_tokens.txt | razdel-ctl sample 1000 | razdel-ctl gen | razdel-ctl diff --show moses_tokenize | less\n```\n\nNon-trivial token tests\n\n```bash\npv data/*_tokens.txt | razdel-ctl gen --recall | razdel-ctl diff space_tokenize \u003e tests.txt\npv data/*_tokens.txt | razdel-ctl gen --precision | razdel-ctl diff re_tokenize \u003e\u003e tests.txt\n```\n\nUpdate integration tests\n\n```bash\ncd tests/data/\npv sents.txt | razdel-ctl up sentenize \u003e t; mv t sents.txt\n```\n\n`razdel` and `moses` diff\n\n```bash\ncat data/*_tokens.txt | razdel-ctl sample 1000 | razdel-ctl gen | razdel-ctl up tokenize | 
razdel-ctl diff moses_tokenize | less\n```\n\n`razdel` performance\n\n```bash\ncat data/*_tokens.txt | razdel-ctl sample 10000 | pv -l | razdel-ctl gen | razdel-ctl diff tokenize | wc -l\n```\n","funding_links":[],"categories":["Python"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fnatasha%2Frazdel","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fnatasha%2Frazdel","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fnatasha%2Frazdel/lists"}