# Cuneiform-Sign-Detection-Code

This repository contains the code for the article:
>Dencker, T., Klinkisch, P., Maul, S. M., and Ommer, B. (2020): Deep Learning of Cuneiform Sign Detection with Weak Supervision using Transliteration Alignment, PLOS ONE, 15:12, pp. 1–21
>[https://doi.org/10.1371/journal.pone.0243039](https://doi.org/10.1371/journal.pone.0243039)

This repository contains code to run the proposed iterative training procedure, as well as code to evaluate and visualize the detection results.
We also provide pre-trained models of the cuneiform sign detector for Neo-Assyrian script after completed iterative training on the [Cuneiform Sign Detection Dataset](https://compvis.github.io/cuneiform-sign-detection-dataset/).
Finally, we make available a web application for the analysis of images of cuneiform clay tablets with the help of a pre-trained cuneiform sign detector.

## Repository description

- General structure:
    - `data`: tablet images, annotations, transliterations, metadata
    - `experiments`: training, testing, evaluation and visualization
    - `lib`: project library code
    - `results`: generated detections (placed, raw and aligned), network weights, logs
    - `scripts`: scripts to run the alignment and placement steps of iterative training


### Use cases

- Pre-processing of training data
    - line detection
- Iterative training
    - generate sign annotations (aligned and placed detections)
    - sign detector training
- Evaluation (on test set)
    - raw detections
    - placed detections
    - aligned detections
- Test & visualize
    - line segmentation and post-processing
    - line-level and sign-level alignments
    - TP/FP for raw, aligned and placed detections (full tablet and crop level)


### Pre-processing
As pre-processing of the training data, line detections are obtained for all tablet images before iterative training.
- use the Jupyter notebooks in `experiments/line_segmentation/` to train and evaluate the line segmentation network and to perform line detection on all tablet images of the train set


### Training
*Iterative training* alternates between generating aligned and placed detections and training a new sign detector:
1. use the command-line scripts in `scripts/generate/` to run the alignment and placement steps of iterative training
2. use the Jupyter notebooks in `experiments/sign_detector/` for the sign detector training step of iterative training

To keep track of the sign detector and the generated sign annotations of each iteration of iterative training (stored in `results/`),
we follow the convention of labelling the sign detector with a *model version* (e.g. v002),
which is also used to label the raw, aligned and placed detections based on this detector.
Besides providing a model version, a user also selects which subsets of the training data to use for the generation of new annotations.
In particular, *subsets of SAAo collections* (e.g. saa01, saa05, saa08) are selected when running the scripts under `scripts/generate/`.
To enable evaluation on the test set, it is necessary to include the collections (test, saa06).


### Evaluation
Use the [*test sign detector notebook*](./experiments/sign_detector/test_sign_detector.ipynb) to test the performance (mAP) of the trained sign detector on the test set or other subsets of the dataset.
In `experiments/alignment_evaluation/` you will find further notebooks for evaluation and visualization of line-level and sign-level alignments, as well as TP/FP for raw, aligned and placed detections (full tablet and crop level).


### Pre-trained models

We provide pre-trained models in the form of [PyTorch model files](https://pytorch.org/tutorials/beginner/saving_loading_models.html) for the line segmentation network as well as the sign detector.

| Model name     | Model type        | Train annotations  |
|----------------|-------------------|--------------------|
| [lineNet_basic_vpub.pth](http://cunei.iwr.uni-heidelberg.de/cuneiformbrowser/model_weights/lineNet_basic_vpub.pth) | line segmentation | 410 lines  |

For the sign detector, we provide the best weakly supervised model (fpn_net_vA) and the best semi-supervised model (fpn_net_vF).

| Model name     | Model type    | Weak supervision in training | Annotations in training | mAP on test_full |
|----------------|---------------|------------------------------|-------------------------|------------------|
| [fpn_net_vA.pth](http://cunei.iwr.uni-heidelberg.de/cuneiformbrowser/model_weights/fpn_net_vA.pth) | sign detector | saa01, saa05, saa08, saa10, saa13, saa16 | None | 45.3 |
| [fpn_net_vF.pth](http://cunei.iwr.uni-heidelberg.de/cuneiformbrowser/model_weights/fpn_net_vF.pth) | sign detector | saa01, saa05, saa08, saa10, saa13, saa16 | train_full (4663 bboxes) | 65.6 |


### Web application

We also provide a demo web application that enables a user to apply a trained cuneiform sign detector to a large collection of tablet images.
The code of the web front-end is available in the [webapp repo](https://github.com/compvis/cuneiform-sign-detection-webapp/).
The back-end code is part of this repository and is located in [lib/webapp/](./lib/webapp/).
Below you find a short animation of how the sign detector is used with this web interface.


### Cuneiform font

For visualization of the cuneiform characters, we recommend installing the [Unicode Cuneiform Fonts](https://www.hethport.uni-wuerzburg.de/cuneifont/) by Sylvie Vanseveren.


## Installation

#### Software
Install general dependencies:

- **OpenGM** with Python wrapper - a library for discrete graphical models. http://hciweb2.iwr.uni-heidelberg.de/opengm/  
This library is needed for the alignment step during training. Testing is not affected. An installation guide for Ubuntu 14.04 can be found [here](./install_opengm.md).

- Python 2.7.X

- Python packages:
    - torch 1.0
    - torchvision
    - scikit-image 0.14.0
    - pandas, scipy, sklearn, jupyter
    - pillow, tqdm, tensorboardX, nltk, Levenshtein, editdistance, easydict


Clone this repository and place the [*cuneiform-sign-detection-dataset*](https://github.com/compvis/cuneiform-sign-detection-dataset) in the [./data sub-folder](./data/).

#### Hardware

Training and evaluation can be performed on a machine with a single GPU (we used a GeForce GTX 1080).
The demo web application can run on a web server without GPU support,
since detection inference with a lightweight MobileNetV2 backbone is fast even in CPU-only mode
(less than 1 s for an image with HD resolution, less than 10 s for 4K resolution).

### References
This repository also includes external code. In particular, we want to mention:
> - kuangliu's *torchcv* and *pytorch-cifar* repositories, from which we adapted the SSD and FPN detector code:
 https://github.com/kuangliu/pytorch-cifar and
 https://github.com/kuangliu/torchcv
> - Ross Girshick's *py-faster-rcnn* repository, from which we adapted part of our evaluation routine:
 https://github.com/rbgirshick/py-faster-rcnn
> - Rico Sennrich's *Bleualign* repository, from which we adapted part of the Bleualign implementation:
 https://github.com/rsennrich/Bleualign
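The bookkeeping convention from the Training section (a model version tag such as `v002` labelling both the detector and the detections it produced) can be sketched as follows. This is an illustrative outline only, not the repository's actual API: the helper names `model_version` and `iterative_training` and the stubbed steps are assumptions made for exposition.

```python
def model_version(iteration):
    """Format an iteration index as a version tag, e.g. 2 -> 'v002'."""
    return "v{:03d}".format(iteration)


def iterative_training(num_iterations, collections=("saa01", "saa05", "saa08")):
    """Sketch of the alternation in iterative training (steps are stubs).

    Each iteration gets a fresh version tag; the same tag labels the raw,
    aligned and placed detections generated with that iteration's detector.
    """
    versions = []
    for it in range(1, num_iterations + 1):
        version = model_version(it)
        # 1. alignment + placement step: in the real project this runs the
        #    scripts under scripts/generate/ on the selected SAAo collections,
        #    writing annotations labelled with `version` into results/
        # 2. detector training step: the notebooks in
        #    experiments/sign_detector/ train a new sign detector on them
        versions.append(version)
    return versions
```

Including the `test` and `saa06` collections when generating annotations then makes the per-version detections available for test-set evaluation.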
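The mAP figures quoted above are means over per-class average precision. As a rough, self-contained sketch of how one AP value is computed in VOC-style evaluation routines (of which py-faster-rcnn, adapted here, is one implementation), assuming detections have already been matched to ground truth and sorted by descending confidence; this uses the all-point interpolation variant, which may differ in detail from the routine actually used in this repository:

```python
def average_precision(tp_flags, num_gt):
    """All-point interpolated average precision for one class.

    tp_flags: for each detection, sorted by descending confidence,
              True if it matched a previously unclaimed ground-truth box.
    num_gt:   total number of ground-truth boxes for this class.
    """
    tps, fps = 0, 0
    recalls, precisions = [], []
    for is_tp in tp_flags:
        tps += 1 if is_tp else 0
        fps += 0 if is_tp else 1
        recalls.append(tps / float(num_gt))
        precisions.append(tps / float(tps + fps))
    # Make the precision envelope monotonically non-increasing from the right.
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = max(precisions[i], precisions[i + 1])
    # Integrate precision over recall.
    ap, prev_recall = 0.0, 0.0
    for recall, precision in zip(recalls, precisions):
        ap += (recall - prev_recall) * precision
        prev_recall = recall
    return ap
```

Averaging this quantity over all sign classes yields the mAP reported in the pre-trained models table.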