Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/aran203/fluxease
Python package for eddy flux data post processing
data-analysis data-science eddy-covariance python
Last synced: 11 days ago
JSON representation
- Host: GitHub
- URL: https://github.com/aran203/fluxease
- Owner: Aran203
- Created: 2024-10-24T15:24:35.000Z (4 months ago)
- Default Branch: main
- Last Pushed: 2024-12-09T15:44:32.000Z (2 months ago)
- Last Synced: 2024-12-17T02:11:40.067Z (2 months ago)
- Topics: data-analysis, data-science, eddy-covariance, python
- Language: Python
- Homepage:
- Size: 758 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 1
Metadata Files:
- Readme: README.md
README
# fluxease (IN DEVELOPMENT)
## Background
Eddy covariance tower data are fundamental for ecosystem research. These datasets can be broadly categorized into energy and carbon variables. Raw data are typically recorded at 30-minute or 1-hour intervals and often contain gaps or anomalous values, necessitating post-processing tools. Because these tools have been developed by different groups, the tools for processing energy variables and those for carbon variables often operate separately. This disparity creates bottlenecks, requiring manual adjustments to translate outputs from one system into inputs for another. There is therefore a need for a unified platform that seamlessly processes raw data for both carbon and energy variables.
## Vision & Current Work
This package is currently in development. It builds on top of the `flux-data-qaqc` package (used to post-process energy variables) by removing the dependency on an input text configuration file. We aim to add support for post-processing carbon and water variables shortly.
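To make the contrast concrete, here is a minimal sketch of a text-configuration dependency versus passing the same information directly in code. The section and key names in the config text are hypothetical illustrations, not `flux-data-qaqc`'s actual schema, and the site values are made up.

```python
import configparser

# Hypothetical config text of the kind a config-file-based workflow requires.
# Sections/keys are illustrative only, NOT flux-data-qaqc's real schema.
config_text = """
[METADATA]
site_elevation = 1000
site_latitude = 40.0

[DATA]
net_radiation_col = NETRAD
"""

cfg = configparser.ConfigParser()
cfg.read_string(config_text)
elevation = cfg.getfloat("METADATA", "site_elevation")  # 1000.0

# fluxease-style equivalent: pass the same values directly as Python objects.
site_elevation, site_latitude = 1000.0, 40.0
variable_map = [("Rn", "NETRAD")]  # internal name -> column name in the data
```

Keeping the metadata and variable map in code avoids maintaining a separate file per site and lets the mapping be built programmatically.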
## How to Use?
1. Clone the repo.
2. In the working directory where you cloned the repository, run the post-processing workflows (similar to how you would with `flux-data-qaqc`) as shown below:

```python
from fluxease import FluxData, VeriFlux
import pandas as pd

data = pd.read_csv(filename)  # read in data

# variable map mapping internal names to the column names found in the data passed in
variable_map = [
    ("date", "Timestamp"),
    ("Rn", "NETRAD"),
    ("H", "H"),
    ("G", "G"),
    ("LE", "LE"),
    ("sw_in", "Rg"),
    ("sw_out", "SW_OUT"),
    ("lw_out", "LW_OUT"),
    ("lw_in", "LW_IN"),
    ("vpd", "VPD"),
    ("t_avg", "Tair"),
    ("wd", "WD"),
    ("ws", "WS"),
    ("ppt", "Precip1_tot"),
    ("rh", "RH_1_1_1"),
    ("theta_1", "SWC_1_1_1"),
    ("theta_2", "SWC_2_1_1")
]

demo = FluxData(data, site_elevation, site_latitude, site_longitude, '30T', variable_map)
# '30T' corresponds to the time interval of the raw dataset (30 minutes)
# Other supported units are H (hours), D (days), W (weeks), M (months), and Y (years)

demo_ver = VeriFlux(demo)
demo_ver.correct_data(meth='ebr')
print(demo_ver.corrected_daily_df['flux_corr'])  # print a column of the corrected daily-frequency dataframe
```
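For intuition, here is a rough, self-contained sketch of what an energy balance ratio (`'ebr'`) closure correction does, using synthetic numbers; the actual `VeriFlux.correct_data` implementation is more involved (e.g. smoothing, filtering, and gap handling) and may differ in detail.

```python
import pandas as pd

# Synthetic half-hourly energy balance terms in W/m^2; values are made up.
df = pd.DataFrame({
    "Rn": [400.0, 450.0],   # net radiation
    "G":  [ 50.0,  60.0],   # ground heat flux
    "H":  [120.0, 140.0],   # sensible heat flux
    "LE": [180.0, 200.0],   # latent heat flux
})

# Energy balance ratio: fraction of available energy (Rn - G)
# accounted for by the measured turbulent fluxes (H + LE).
ebr = (df["H"] + df["LE"]) / (df["Rn"] - df["G"])

# Closure correction: scale the turbulent fluxes up so the balance closes.
df["H_corr"] = df["H"] / ebr
df["LE_corr"] = df["LE"] / ebr
```

After this correction, `H_corr + LE_corr` equals the available energy `Rn - G` at every timestep, i.e. the energy balance closes exactly.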
## Contributors
- Karan Bhalla (karanbhalla204 \ tamu \ edu)
- Debasish Mishra (debmishra \ tamu \ edu)