https://github.com/alexcoppe/getting_cleaning_data
- Host: GitHub
- URL: https://github.com/alexcoppe/getting_cleaning_data
- Owner: alexcoppe
- Created: 2014-11-20T12:40:28.000Z (over 10 years ago)
- Default Branch: master
- Last Pushed: 2014-11-20T21:36:07.000Z (over 10 years ago)
- Last Synced: 2023-05-13T21:40:54.817Z (about 2 years ago)
- Language: R
- Size: 520 KB
- Stars: 0
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
Getting and Cleaning Data Coursera Course
=========================================

## Project Summary
You should create one R script called `run_analysis.R` that does the following (a minimal sketch of this processing appears after the list):

1. Merges the training and the test sets to create one data set.
2. Extracts only the measurements on the mean and standard deviation for each measurement.
3. Uses descriptive activity names to name the activities in the data set.
4. Appropriately labels the data set with descriptive variable names.
5. From the data set in step 4, creates a second, independent tidy data set with the average of each variable for each activity and each subject.
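For orientation, the sketch below shows the kind of processing such a script performs. It assumes the standard file layout of the unzipped UCI HAR Dataset (file names such as `X_train.txt` and `activity_labels.txt` come from that dataset, not from this repository); the actual `run_analysis.R` may differ in its details.

```r
library(plyr)

# 1. Merge the training and the test sets into one data set
x <- rbind(read.table("UCI HAR Dataset/train/X_train.txt"),
           read.table("UCI HAR Dataset/test/X_test.txt"))
y <- rbind(read.table("UCI HAR Dataset/train/y_train.txt"),
           read.table("UCI HAR Dataset/test/y_test.txt"))
subject <- rbind(read.table("UCI HAR Dataset/train/subject_train.txt"),
                 read.table("UCI HAR Dataset/test/subject_test.txt"))

# 2. Keep only the mean() and std() measurements
features <- read.table("UCI HAR Dataset/features.txt", stringsAsFactors = FALSE)
keep <- grepl("mean\\(\\)|std\\(\\)", features$V2)
x <- x[, keep]

# 3. and 4. Descriptive activity names and descriptive variable names
activities <- read.table("UCI HAR Dataset/activity_labels.txt", stringsAsFactors = FALSE)
names(x) <- features$V2[keep]
dataset <- cbind(subject = subject$V1,
                 activity = activities$V2[y$V1],
                 x)

# 5. Average of each variable for each activity and each subject
measurements <- setdiff(names(dataset), c("subject", "activity"))
tidy <- ddply(dataset, .(subject, activity),
              function(piece) colMeans(piece[, measurements]))
write.table(tidy, "second.dataset.txt", row.names = FALSE)
```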
## Script Dependencies

The script depends on the plyr package. Use `install.packages("plyr")` to install it if it is not already available in your R installation.
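As a small optional convenience (not part of the original instructions), the package can be installed only when it is missing:

```r
# Install plyr only if it is not already available
if (!requireNamespace("plyr", quietly = TRUE)) {
  install.packages("plyr")
}
```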
## How to run the script and produce the tidy data set

1. Unzip the [zip archive](https://d396qusza40orc.cloudfront.net/getdata%2Fprojectfiles%2FUCI%20HAR%20Dataset.zip) into a directory.
2. Set that directory as the R working directory (`setwd("/home/username/directory_where_you_put_the_zip_file")`).
3. Source the script to obtain a `second.dataset.txt` file (`source("run_analysis.R")`).
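Put together, a typical session might look like the sketch below; the directory path is only an example, and reading the output with `header = TRUE` assumes the file was written with column names (as `write.table` does by default).

```r
# Example session; the directory path is hypothetical
setwd("/home/username/directory_where_you_put_the_zip_file")
source("run_analysis.R")   # writes second.dataset.txt in the working directory

# Inspect the resulting tidy data set
tidy <- read.table("second.dataset.txt", header = TRUE)
head(tidy)
```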