{"id":41847722,"url":"https://github.com/camel-lab/codafication","last_synced_at":"2026-01-25T10:04:45.646Z","repository":{"id":247049974,"uuid":"817195885","full_name":"CAMeL-Lab/codafication","owner":"CAMeL-Lab","description":"Code, models, and data for \"Exploiting Dialect Identification in Automatic Dialectal Text Normalization\". ArabicNLP 2024, ACL.","archived":false,"fork":false,"pushed_at":"2024-07-06T07:45:27.000Z","size":3490,"stargazers_count":0,"open_issues_count":0,"forks_count":0,"subscribers_count":1,"default_branch":"master","last_synced_at":"2025-09-09T22:06:27.985Z","etag":null,"topics":["arabic","arabic-nlp","deep-learning","nlp","text-normalization"],"latest_commit_sha":null,"homepage":"https://arxiv.org/abs/2407.03020","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/CAMeL-Lab.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-06-19T08:01:47.000Z","updated_at":"2024-07-22T08:48:27.000Z","dependencies_parsed_at":"2024-07-06T10:03:06.294Z","dependency_job_id":null,"html_url":"https://github.com/CAMeL-Lab/codafication","commit_stats":null,"previous_names":["camel-lab/codafication"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/CAMeL-Lab/codafication","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CAMeL-Lab%2Fcodafication","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CAMeL-Lab%2Fcodafication/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CAMeL-Lab%2Fcodafication/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CAMeL-Lab%2Fcodafication/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/CAMeL-Lab","download_url":"https://codeload.github.com/CAMeL-Lab/codafication/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CAMeL-Lab%2Fcodafication/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":28751103,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-01-25T09:58:17.166Z","status":"ssl_error","status_checked_at":"2026-01-25T09:55:56.104Z","response_time":113,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.5:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["arabic","arabic-nlp","deep-learning","nlp","text-normalization"],"created_at":"2026-01-25T10:04:45.516Z","updated_at":"2026-01-25T10:04:45.640Z","avatar_url":"https://github.com/CAMeL-Lab.png","language":"Python","readme":"# CODAfication\n\n\nThis repo contains code and pretrained models to reproduce the results in our paper [Exploiting Dialect Identification\nin Automatic Dialectal Text Normalization](https://arxiv.org/abs/2407.03020).\n\n\n## Requirements:\n\nThe code was written for python\u003e=3.9, pytorch 1.11.1, and transformers 4.22.2. You will need a few additional packages. Here's how you can set up the environment using conda (assuming you have conda and cuda installed):\n\n```bash\ngit clone https://github.com/CAMeL-Lab/codafication.git\ncd coda\n\nconda create -n coda python=3.9\nconda activate coda\n\npip install -r requirements.txt\n```\n\n## Experiments and Reproducibility:\n[data](data): includes all the data we used throughout our paper to train and test various systems. This includes alignments, m2edits, the MADAR CODA Corpus, and all the utilities we used.\n\n[codafication](codafication): includes the scripts needed to train and evaluate our codafication models.\n\n[utils](utils): includes various scripts used for evaluation and statistical significance.\n\n\n## Hugging Face Integration:\nWe make our CODAfication models publicly available on [Hugging Face](https://huggingface.co/collections/CAMeL-Lab/codafication-6687ee4059e2d45fc20ce22b).\n\n```python\nfrom transformers import AutoTokenizer, AutoModelForSeq2SeqLM\nfrom camel_tools.dialectid import DIDModel6\nimport torch\n\nDID = DIDModel6.pretrained()\nDA_PHRASE_MAP = {'BEI': 'في بيروت منقول',\n                 'CAI': 'في القاهرة بنقول',\n                 'DOH': 'في الدوحة نقول',\n                 'RAB': 'في الرباط كنقولو',\n                 'TUN': 'في تونس نقولو'}\n\n\ndef predict_dialect(sent):\n    \"\"\"Predicts the dialect of a sentence using the\n       CAMeL Tools MADAR 6 DID model\"\"\"\n\n    predictions = DID.predict([sent])\n    scores = predictions[0].scores\n\n    if predictions[0].top != \"MSA\":\n        # get the highest pred\n        pred = sorted(scores.items(),\n                      key=lambda x: x[1], reverse=True)[0]\n    else:\n        # get the second highest pred\n        pred = sorted(scores.items(),\n                      key=lambda x: x[1], reverse=True)[1]\n\n    dialect = pred[0]\n    score = pred[1]\n\n    return dialect, score\n\ntokenizer = AutoTokenizer.from_pretrained('CAMeL-Lab/arat5-coda-did')\nmodel = AutoModelForSeq2SeqLM.from_pretrained('CAMeL-Lab/arat5-coda-did')\n\ntext = 'اتنين هامبورجر و اتنين قهوة، لو سمحت. عايزهم تيك اواي.'\n\npred_dialect, _ = predict_dialect(text)\ntext = DA_PHRASE_MAP[pred_dialect] + ' ' + text\n\ninputs = tokenizer(text, return_tensors='pt')\ngen_kwargs = {'num_beams': 5, 'max_length': 200,\n              'num_return_sequences': 1,\n              'no_repeat_ngram_size': 0, 'early_stopping': False\n              }\n\ncodafied_text = model.generate(**inputs, **gen_kwargs)\ncodafied_text = tokenizer.batch_decode(codafied_text,\n                                       skip_special_tokens=True,\n                                       clean_up_tokenization_spaces=False)[0]\n\nprint(codafied_text)\n\"اثنين هامبورجر واثنين قهوة، لو سمحت. عايزهم تيك اوي.\"\n```\n\n\n## License:\n\nThis repo is available under the MIT license. See the [LICENSE](LICENSE) for more info.\n\n## Citation\n\nIf you find the code or data in this repo helpful, please cite our [paper](https://arxiv.org/abs/2407.03020):\n\n```BibTeX\n@inproceedings{alhafni-etal-2024-exploiting,\n    title = \"Exploiting Dialect Identification in Automatic Dialectal Text Normalization\",\n    author = \"Alhafni, Bashar  and\n      Al-Towaity, Sarah  and\n      Fawzy, Ziyad  and\n      Nassar, Fatema and\n      Eryani, Fadhl and\n      Bouamor, Houda and\n      Habash, Nizar\",\n    booktitle = \"Proceedings of ArabicNLP 2024\"\n    month = \"aug\",\n    year = \"2024\",\n    address = \"Bangkok, Thailand\",\n    abstract = \"Dialectal Arabic is the primary spoken language used by native Arabic speakers in daily communication. The rise of social media platforms has notably expanded its use as a written language. However, Arabic dialects do not have standard orthographies. This, combined with the inherent noise in user-generated content on social media, presents a major challenge to NLP applications dealing with Dialectal Arabic. In this paper, we explore and report on the task of CODAfication, which aims to normalize Dialectal Arabic into the Conventional Orthography for Dialectal Arabic (CODA). We work with a unique parallel corpus of multiple Arabic dialects focusing on five major city dialects. We benchmark newly developed pretrained sequence-to-sequence models on the task of CODAfication. We further show that using dialect identification information improves the performance across all dialects. We make our code, data, and pretrained models publicly available.\",\n}\n```\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcamel-lab%2Fcodafication","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fcamel-lab%2Fcodafication","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcamel-lab%2Fcodafication/lists"}