{"id":13559343,"url":"https://github.com/carrierlxk/COSNet","last_synced_at":"2025-04-03T14:32:01.154Z","repository":{"id":75069833,"uuid":"178267179","full_name":"carrierlxk/COSNet","owner":"carrierlxk","description":"See More, Know More: Unsupervised Video Object Segmentation with Co-Attention Siamese Networks (CVPR19)","archived":false,"fork":false,"pushed_at":"2021-02-25T13:06:54.000Z","size":1491,"stargazers_count":320,"open_issues_count":16,"forks_count":61,"subscribers_count":11,"default_branch":"master","last_synced_at":"2024-11-04T10:44:08.094Z","etag":null,"topics":["attention-siamese-networks","co-attention","cvpr2019","object-segmentation","segmentation","video-object-segmentation","video-segmentation"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/carrierlxk.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2019-03-28T19:11:59.000Z","updated_at":"2024-02-26T17:27:31.000Z","dependencies_parsed_at":"2023-10-20T19:01:19.395Z","dependency_job_id":null,"html_url":"https://github.com/carrierlxk/COSNet","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/carrierlxk%2FCOSNet","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/carrierlxk%2FCOSNet/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/carrierlxk%2FCOSNet/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/carrierlxk%2FCOSNet/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hos
ts/GitHub/owners/carrierlxk","download_url":"https://codeload.github.com/carrierlxk/COSNet/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247018562,"owners_count":20870022,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["attention-siamese-networks","co-attention","cvpr2019","object-segmentation","segmentation","video-object-segmentation","video-segmentation"],"created_at":"2024-08-01T13:00:19.389Z","updated_at":"2025-04-03T14:32:00.814Z","avatar_url":"https://github.com/carrierlxk.png","language":"Python","readme":"# COSNet\nCode for the CVPR 2019 paper: \n\n[See More, Know More: Unsupervised Video Object Segmentation with\nCo-Attention Siamese Networks](http://openaccess.thecvf.com/content_CVPR_2019/papers/Lu_See_More_Know_More_Unsupervised_Video_Object_Segmentation_With_Co-Attention_CVPR_2019_paper.pdf)\n\n[Xiankai Lu](https://sites.google.com/site/xiankailu111/), [Wenguan Wang](https://sites.google.com/view/wenguanwang), Chao Ma, Jianbing Shen, Ling Shao, Fatih Porikli\n\n##\n\n![](../master/framework.png)\n\n- - -\n:new:\n\nOur group co-attention achieves a further performance gain (81.1 mean J on the DAVIS-16 dataset); the related code has also been released.\n\nThe pre-trained model, testing code, and training code:\n\n### Quick Start\n\n#### Testing\n\n1. Install PyTorch (version 1.0.1).\n\n2. Download the pretrained model. In 'test_coattention_conf.py', change the DAVIS dataset path, pretrained model path, and result path.\n\n3. Run the command: python test_coattention_conf.py --dataset davis --gpus 0\n\n4. 
The CRF post-processing code comes from: https://github.com/lucasb-eyer/pydensecrf. \n\nThe pretrained weights can be downloaded from [GoogleDrive](https://drive.google.com/open?id=14ya3ZkneeHsegCgDrvkuFtGoAfVRgErz) or [BaiduPan](https://pan.baidu.com/s/16oFzRmn4Meuq83fCYr4boQ), pass code: xwup.\n\nThe segmentation results on DAVIS, FBMS, and YouTube-Objects can be downloaded from the [DAVIS benchmark](https://davischallenge.org/davis2016/soa_compare.html) or\n[GoogleDrive](https://drive.google.com/open?id=1JRPc2kZmzx0b7WLjxTPD-kdgFdXh5gBq) or [BaiduPan](https://pan.baidu.com/s/11n7zAt3Lo2P3-42M2lsw6Q), pass code: q37f.\n\nThe YouTube-Objects dataset can be downloaded from [here](http://calvin-vision.net/datasets/youtube-objects-dataset/) and its annotations can be found [here](http://vision.cs.utexas.edu/projects/videoseg/data_download_register.html).\n\nThe FBMS dataset can be downloaded from [here](https://lmb.informatik.uni-freiburg.de/resources/datasets/moseg.en.html).\n\n#### Training\n\n1. Download all the training datasets, including the MSRA10K and DUT saliency datasets. Create a folder called 'images' and put these two datasets into it.\n\n2. Download the deeplabv3 model from [GoogleDrive](https://drive.google.com/open?id=1hy0-BAEestT9H4a3Sv78xrHrzmZga9mj). Put it into the folder pretrained/deep_labv3.\n\n3. Change the video path, image path, and deeplabv3 path in train_iteration_conf.py. Create two txt files that store the saliency dataset names and the DAVIS-16 training sequence names, then change the txt paths in PairwiseImg_video.py.\n\n4. 
Run the command: python train_iteration_conf.py --dataset davis --gpus 0,1\n\n### Citation\n\nIf you find the code and dataset useful in your research, please consider citing:\n```\n@InProceedings{Lu_2019_CVPR,  \nauthor = {Lu, Xiankai and Wang, Wenguan and Ma, Chao and Shen, Jianbing and Shao, Ling and Porikli, Fatih},  \ntitle = {See More, Know More: Unsupervised Video Object Segmentation With Co-Attention Siamese Networks},  \nbooktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},  \nyear = {2019}  \n}\n@article{lu2020_pami,\n  title={Zero-Shot Video Object Segmentation with Co-Attention Siamese Networks},\n  author={Lu, Xiankai and Wang, Wenguan and Shen, Jianbing and Crandall, David and Luo, Jiebo},\n  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},\n  year={2020},\n  publisher={IEEE}\n}\n```\n\n### Other related projects/papers:\n[Saliency-Aware Geodesic Video Object Segmentation (CVPR15)](https://github.com/wenguanwang/saliencysegment)\n\n[Learning Unsupervised Video Primary Object Segmentation through Visual Attention (CVPR19)](https://github.com/wenguanwang/AGS)\n\nFor any comments, please email: carrierlxk@gmail.com\n","funding_links":[],"categories":["Co-salient Object Detection","Video Understanding"],"sub_categories":["2020","3D SemanticSeg"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcarrierlxk%2FCOSNet","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fcarrierlxk%2FCOSNet","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcarrierlxk%2FCOSNet/lists"}