https://github.com/layumi/more2024
Multimedia Object Re-identification Workshop (MORE) @ ICMR2024
- Host: GitHub
- URL: https://github.com/layumi/more2024
- Owner: layumi
- Created: 2024-01-04T15:38:19.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2025-01-19T14:44:19.000Z (9 months ago)
- Last Synced: 2025-02-11T17:59:15.178Z (8 months ago)
- Size: 107 KB
- Stars: 2
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
---
title: "MORE 2024"
collection: pages
permalink: /MORE2024
author_profile: false
---
ACM International Conference on Multimedia Retrieval
Workshop on
Multimedia Object Re-ID: Advancements, Challenges, and Opportunities (MORE 2024)
Accepted papers will be published in the ACM ICMR Workshop proceedings (top 50%) and go through the same peer-review process as regular papers. Several authors will be invited to give an oral presentation.
[[Accepted Workshop Proposal]](https://zdzheng.xyz/files/ICMR24_Workshop_Object_Re_ID.pdf)
[[Submission Site]](https://openreview.net/group?id=ACM.org/ICMR/2024/Workshop/MORE)

## News
- Good papers will be recommended to the [ACM TOMM Special Issue](https://dl.acm.org/pb-assets/static_journal_pages/tomm/pdf/ACM-SI_ToMM_MMGR-1708635711467.pdf). (Re-submission is required.)
- The paper submission site is open.

## Recorded Video
[[YouTube]](https://youtu.be/HNpVVGRci_Q) [[Bilibili]](https://www.bilibili.com/video/BV1TS411N7fi/?vd_source=38510c91e89a1f1ac248579bb05789d1)
## Tentative Schedule (Bangkok Time, GMT+7, 10 June)
- 09:00am-09:10am Opening Remarks **Zhedong Zheng** (University of Macau)
- 09:10am-09:40am Lifelong Person Re-identification **Nan Pu** (University of Trento)
- 09:45am-10:15am Privacy-protected Person Re-identification **Yutian Lin** (Wuhan University)
- Coffee Break
- 10:30am-11:00am Few-shot Learning from Meta-learning to Efficient Tuning **Zhihe Lu** (NUS)
- 11:00am-11:15am Exploring Part Features for Unsupervised Visible-Infrared Person Re-Identification (Workshop Oral)
- 11:15am-11:30am Refining Video-Based Person Re-Identification: An Integrated Framework with Facial and Body Cues (Workshop Oral)
- 11:30am-11:45am Analytical Study of DreamGaussian and MTN in the Field of 3D Generation (Workshop Oral)

## Abstract
Object re-identification (or object re-ID) has gained significant attention in recent years, fueled by the increasing demand for advanced video analysis and safety systems. In object
re-identification, a query can be of different modalities, such as an image, a video, or natural language, containing or describing the object of interest.
This workshop aims to bring together researchers, practitioners, and enthusiasts interested in object re-identification to delve into the latest advancements, challenges, and opportunities in this dynamic field. The workshop covers a spectrum of topics related to object re-identification, including but not limited to deep metric learning, multi-view data generation, video-based object re-identification, cross-domain object re-identification and real-world applications.
The workshop provides a platform for researchers to showcase their work, exchange ideas, and foster potential collaborations. Additionally, it serves as a valuable opportunity for practitioners to stay abreast of the latest developments in object re-identification technology.
Overall, this workshop creates a unique space to explore the rapidly evolving field of object re-identification and its profound impact on advancing the capabilities of multimedia analysis and retrieval.

**Key Words** Multimedia Retrieval, Object Re-identification, Representation Learning, Deep Metric Learning, Multi-view Generation
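At its core, a re-ID system ranks a gallery of candidates by embedding similarity to the query. A minimal sketch of that retrieval step, with hypothetical toy embeddings standing in for the output of a trained encoder:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rank_gallery(query, gallery):
    """Return gallery IDs sorted by descending similarity to the query."""
    scores = {gid: cosine(query, feat) for gid, feat in gallery.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Toy 3-D embeddings; real systems use high-dimensional learned features.
gallery = {
    "id_1": [0.9, 0.1, 0.0],
    "id_2": [0.1, 0.9, 0.1],
    "id_3": [0.8, 0.2, 0.1],
}
query = [1.0, 0.0, 0.0]
print(rank_gallery(query, gallery))  # id_1 ranked first
```

Deep metric learning, one of the workshop topics, is precisely about training the encoder so that this similarity ranking places matching identities first.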
**The list of possible topics includes, but is not limited to:**
* New Datasets and Benchmarks
* Deep Metric Learning
* Multi-view Data Generation
* Video-based Object Re-identification
* Cross-domain Object Re-identification
* Object Re-identification Domain Adaptation / Generalization
* Single/Multiple Object Tracking
* Object Geo-localization
* Multimedia Re-ranking

## Submission
The submission site is at [OpenReview](https://openreview.net/group?id=ACM.org/ICMR/2024/Workshop/MORE&referrer=%5BHomepage%5D(%2F)#tab-your-consoles).
The submission template can be found at [ACM](https://www.acm.org/publications/proceedings-template), or you may directly follow the [Overleaf template](https://www.overleaf.com/read/yfpxtyngmzjn).
## Submission Type
**(1).** Original papers (up to 4 pages in length, plus unlimited pages for references): an original solution to tasks within the scope of the workshop topics and themes.

**(2).** Position or perspective papers (up to 4 pages in length, plus unlimited pages for references): original ideas, perspectives, research vision, and open challenges in the scope of the workshop topics;
**(3).** Survey papers (up to 4 pages in length, plus unlimited pages for references): papers summarizing existing publications in leading conferences and high-impact journals that are relevant for the topic of the workshop;
Page limits include diagrams and appendices.

Submissions should be single-blind due to limited publication time, written in English, and formatted according to the current ACM two-column conference format.
Suitable LaTeX, Word, and Overleaf templates are available from the ACM Website (use the "sigconf" proceedings template for LaTeX and the Interim Template for Word).

## Important Dates
**Submission of papers:**
* Workshop Papers Submission: 16 April 2024
* Workshop Papers Notification: 22 April 2024
* Camera-ready Submission: 25 April, 2024
* Student Travel Grant Deadline: 25 April 2024 ([details](https://icmr2024.org/student-travel-grants.html))
* Workshop Date: 10 June 2024

Please note: the submission deadline is 11:59 p.m. on the stated deadline date, [Anywhere on Earth](https://time.is/Anywhere_on_Earth).
### Tips:
* For privacy protection, please blur faces in published materials (paper, video, poster, etc.).
* For social good, please avoid misleading words such as `surveillance` and `secret`.

## Organizing Team
| | | |
| :-: | :-: | :-: |
| [Zhedong Zheng](https://zdzheng.xyz), University of Macau, China | [Yaxiong Wang](https://dblp.org/pid/202/3251.html), Hefei University of Technology, China | [Xuelin Qian](https://naiq.github.io/), Northwestern Polytechnical University, China |
| [Zhun Zhong](https://zhunzhong.site), University of Nottingham, United Kingdom | [Zheng Wang](https://wangzwhu.github.io/home/), Wuhan University, China | [Liang Zheng](https://zheng-lab.cecs.anu.edu.au), Australian National University, Australia |

## Workshop Citation
```
@inproceedings{zheng2024MORE,
  title={MORE'24 Multimedia Object Re-ID: Advancements, Challenges, and Opportunities},
  author={Zheng, Zhedong and Wang, Yaxiong and Qian, Xuelin and Zhong, Zhun and Wang, Zheng and Zheng, Liang},
  booktitle={ACM International Conference on Multimedia Retrieval Workshop},
  year={2024}
}
```