Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
CMU Master's Thesis Files
https://github.com/adityak77/masters-thesis
- Host: GitHub
- URL: https://github.com/adityak77/masters-thesis
- Owner: adityak77
- Created: 2023-07-06T20:47:37.000Z (over 1 year ago)
- Default Branch: master
- Last Pushed: 2023-11-13T00:12:45.000Z (about 1 year ago)
- Last Synced: 2023-11-13T01:25:45.943Z (about 1 year ago)
- Language: TeX
- Homepage: http://reports-archive.adm.cs.cmu.edu/anon/2023/abstracts/23-124.html
- Size: 18.9 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- Awesome-Segment-Anything - [code]
README
# Learning from Human Videos for Robotic Manipulation
This repository contains the full source code for [Aditya Kannan](https://adityak77.github.io/)'s Master's Thesis [document](http://reports-archive.adm.cs.cmu.edu/anon/2023/abstracts/23-124.html).
## Publications behind this thesis
Some of the content in this thesis is drawn from the following publications:
DEFT: Dexterous Fine-Tuning for Real-World Hand Policies
Aditya Kannan*, Kenneth Shaw*, Pragna Mannam, Shikhar Bahl, Deepak Pathak
CoRL 2023
[web] [arXiv] [pdf] [data]

---
The experimental source code and data produced for this
thesis are freely available as open source software
in the following repositories:

+ [adityak77/**deft-data**](https://github.com/adityak77/deft-data) —
Training dataset for "DEFT: Dexterous Fine-Tuning for Real-World Hand Policies", an approach that fine-tunes an affordance prior from human videos for real-world kitchen tasks.
+ [adityak77/**rewards-from-human-videos**](https://github.com/adityak77/rewards-from-human-videos) —
Our method learns an agent- and domain-agnostic representation that can be applied as a zero-shot reward function for robotic control (a rough sketch of this idea appears after the list below).
+ [adityak77/**inpainting**](https://github.com/adityak77/inpainting) —
Various segmentation and video inpainting approaches intended for use in reward learning. Accompanies the code for Rewards from Human Videos.
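For intuition only, here is a minimal sketch of how a learned video representation can act as a zero-shot reward. It assumes the representation is an embedding $\phi$ of video clips and that the reward is a similarity score between the robot's rollout and a human demonstration in that embedding space; the exact objective used in the thesis is the one defined in the rewards-from-human-videos repository.

```latex
% Hypothetical sketch, not the exact objective from the thesis:
% \phi embeds a video clip into a shared, domain-agnostic space;
% v^h is a human demonstration and o_{1:t} is the robot's rollout so far.
\begin{equation*}
  r(o_{1:t}) \;=\; \operatorname{sim}\bigl(\phi(o_{1:t}),\, \phi(v^{h})\bigr),
  \qquad
  \operatorname{sim}(x, y) = \frac{x^{\top} y}{\lVert x \rVert \, \lVert y \rVert}.
\end{equation*}
```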
---

+ This repository is based on
[Ellis Brown](https://ellisbrown.github.io/)'s [thesis repo](https://github.com/ellisbrown/cmu-masters-thesis),
which started from [Brandon Amos](http://bamos.github.io)'s [thesis repo](https://github.com/bamos/thesis),
which was based on [Cyrus Omar's thesis code](https://github.com/cyrus-/thesis),
which, in turn, was based on a CMU thesis template
by [David Koes](http://bits.csb.pitt.edu/)
and others before.

---
The BibTeX for this document is:
```bibtex
@mastersthesis{kannan2023learning,
  author = {Aditya Kannan},
  title  = {Learning from Human Videos for Robotic Manipulation},
  school = {Carnegie Mellon University},
  year   = 2023,
  month  = jul,
}
```
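To cite the thesis from another LaTeX document, the entry above can be saved to a bibliography file and referenced by its key. A minimal usage sketch, assuming the entry lives in a file named `references.bib` (that file name is only an example):

```latex
% In the citing document (assumes the BibTeX entry above was saved
% to references.bib alongside this .tex file):
This chapter builds on prior work~\cite{kannan2023learning}.

% Near the end of the document:
\bibliographystyle{plain}   % any standard bibliography style works here
\bibliography{references}   % loads references.bib
```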