{"id":13627546,"url":"https://github.com/ascust/SEC-MXNet","last_synced_at":"2025-04-17T00:31:52.303Z","repository":{"id":117384725,"uuid":"123401612","full_name":"ascust/SEC-MXNet","owner":"ascust","description":"MXNet implementation of SEC","archived":false,"fork":false,"pushed_at":"2018-08-15T01:26:46.000Z","size":137,"stargazers_count":21,"open_issues_count":2,"forks_count":4,"subscribers_count":2,"default_branch":"master","last_synced_at":"2024-08-01T22:40:37.254Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/ascust.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null}},"created_at":"2018-03-01T07:55:03.000Z","updated_at":"2021-07-21T14:08:49.000Z","dependencies_parsed_at":"2024-01-14T08:06:24.116Z","dependency_job_id":null,"html_url":"https://github.com/ascust/SEC-MXNet","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ascust%2FSEC-MXNet","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ascust%2FSEC-MXNet/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ascust%2FSEC-MXNet/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ascust%2FSEC-MXNet/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/ascust","download_url":"https://codeload.github.com/ascust/SEC-MXNet/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_coun
t":223735013,"owners_count":17194027,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-01T22:00:35.452Z","updated_at":"2024-11-08T18:30:37.372Z","avatar_url":"https://github.com/ascust.png","language":"Python","readme":"# MXNet Implementation of SEC\nThis is a reimplementation of the paper \"Seed, Expand and Constrain: Three Principles for Weakly-Supervised Image Segmentation\" ([Original Github](https://github.com/kolesman/SEC)).\n\n## Features\n\n1. Compared with the original Caffe version, this version includes all of the training code, including the code for training foreground cues and background cues. Therefore, new datasets can be used.\n\n2. This version supports multi-GPU training, which is much faster than the original Caffe version.\n\n3. Apart from the VGG16 base network, Resnet50 is also provided as a backbone.\n\n4. On VOC12 validation, the VGG16 version scores slightly below the result reported in the paper (IoU: 50.2 vs 50.7) due to training randomness. The Resnet50 version scores considerably higher (IoU: 55.3).\n\n## Dependencies\n\nThe code is implemented in MXNet. Please see the official website ([HERE](https://mxnet.apache.org)) for installation instructions, and make sure MXNet is compiled with OpenCV support.
\n\nThe other Python dependencies are listed in \"dependencies.txt\" and can be installed with:\n\n```pip install -r dependencies.txt```\n\n## Dataset\n\nTwo datasets are used: PASCAL VOC12 ([HERE](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar)) and SBD ([HERE](http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/semantic_contours/benchmark.tgz)).\nExtract them into the \"dataset\" folder, and then run:\n\n```python create_dataset.py```\n\n## Training\n\nDownload the models pretrained on ImageNet ([HERE](https://1drv.ms/u/s!ArsE1Wwv6I6dgQGqn_nDGobaSSSf)), extract the files and put them into the \"models\" folder.\n\nIn \"cores.config.py\", the base network can be changed by editing \"conf.BASE_NET\". The other parameters can also be tweaked.\n\nThe training process involves three steps: training bg cues, training fg cues, and training the SEC model:\n\n```\npython train_bg_cues.py --gpus 0,1,2,3\npython train_fg_cues.py --gpus 0,1,2,3\npython train_SEC.py --gpus 0,1,2,3\n```\n\n## Evaluation\n\nSnapshots are saved in the \"snapshots\" folder. To evaluate a snapshot (for example, epoch 8), run:\n\n```python eval.py --gpu 0 --epoch 8```\n\nThere are other flags:\n\n```\n--savemask          save output masks\n--crf               use CRF as postprocessing\n--flip              also use flipped images in inference\n```\n\nTrained models are available: [vgg16](https://1drv.ms/u/s!ArsE1Wwv6I6dgQJzWjofCuKSjj__) and [resnet50](https://1drv.ms/u/s!ArsE1Wwv6I6dgQNwHjkgirojG_zW).","funding_links":[],"categories":["\u003ca name=\"Vision\"\u003e\u003c/a\u003e2. Vision"],"sub_categories":["2.3 Image Segmentation"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fascust%2FSEC-MXNet","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fascust%2FSEC-MXNet","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fascust%2FSEC-MXNet/lists"}