{"id":21513822,"url":"https://github.com/raul-dc/crypto-etl","last_synced_at":"2025-08-17T05:06:55.298Z","repository":{"id":262534436,"uuid":"887180850","full_name":"Raul-DC/Crypto-ETL","owner":"Raul-DC","description":"Data pipeline for processing cryptocurrency data using Azure Blob Storage, Databricks, Spark and downloading results via the Databricks CLI.","archived":false,"fork":false,"pushed_at":"2025-03-27T07:55:55.000Z","size":74,"stargazers_count":0,"open_issues_count":0,"forks_count":0,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-08-05T04:26:18.268Z","etag":null,"topics":["api","cli","cryptocurrency","csv","etl-pipeline","java","jupyter-notebook"],"latest_commit_sha":null,"homepage":"","language":"Jupyter Notebook","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Raul-DC.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2024-11-12T09:59:13.000Z","updated_at":"2025-03-27T07:55:58.000Z","dependencies_parsed_at":"2024-11-13T00:23:35.369Z","dependency_job_id":"de43b8f4-27e8-4b69-8a2d-5f0bad2fad1c","html_url":"https://github.com/Raul-DC/Crypto-ETL","commit_stats":null,"previous_names":["raul-dc/crypto-etl"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/Raul-DC/Crypto-ETL","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Raul-DC%2FCrypto-ETL","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Raul-DC%2FCrypto-ETL/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Raul-DC%2FCrypto-ETL/rele
ases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Raul-DC%2FCrypto-ETL/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/Raul-DC","download_url":"https://codeload.github.com/Raul-DC/Crypto-ETL/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Raul-DC%2FCrypto-ETL/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":270807934,"owners_count":24649346,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-08-17T02:00:09.016Z","response_time":129,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["api","cli","cryptocurrency","csv","etl-pipeline","java","jupyter-notebook"],"created_at":"2024-11-23T23:18:04.352Z","updated_at":"2025-08-17T05:06:55.262Z","avatar_url":"https://github.com/Raul-DC.png","language":"Jupyter Notebook","readme":"---\n\n# \u003cp align=\"center\"\u003e📊 Crypto-ETL ♻️\u003c/p\u003e\n\nCryptoETL is a Java-based ETL (Extract, Transform, Load) project that pulls cryptocurrency data from the CoinGecko API 🦎, processes it, and stores the results in a CSV file. The data is then analyzed in Azure Databricks for insights, and the process concludes by using the Databricks CLI to download the CSV file locally for further use or reporting. 
This project showcases data engineering skills, Java language usage, API interaction, Azure storage usage, and data analysis with Databricks (using Jupyter notebooks) and the Databricks CLI for efficient data management and file handling.\n\n![image](https://github.com/user-attachments/assets/1e58745a-d8c7-4051-84de-350559c42cad)\n\n\u003cp align=\"center\"\u003e(You can click on the image to enlarge it)\u003c/p\u003e\n\n---\n\n## \u003cp align=\"center\"\u003e📃 Table of Contents 📃\u003c/p\u003e\n- [Installation](#installation)\n- [Project Setup](#project-setup)\n- [API Interaction](#api-interaction)\n- [Azure Setup](#azure-setup)\n- [Databricks Cluster Setup](#databricks-cluster-setup)\n- [Running the ETL Process](#running-the-etl-process)\n- [Jupyter Notebook](#jupyter-notebook)\n- [Downloading CSV File](#downloading-csv-file)\n- [Troubleshooting](#troubleshooting)\n\n---\n\n## \u003cp align=\"center\"\u003eInstallation\u003c/p\u003e\n\n1. **Java Installation**\n   \n   _Ensure that Java is installed on your machine. You can download it_ [here](https://www.oracle.com/java/technologies/javase-downloads.html). ⬅️\n\n\t ![image](https://github.com/user-attachments/assets/390f9270-4eac-44b2-970e-d1e04faf2529)\n\n\n2. **IntelliJ IDEA**\n    \n   _Download and install IntelliJ IDEA (Community Edition) from_ [this link](https://www.jetbrains.com/idea/). ⬅️\n\n  ![image](https://github.com/user-attachments/assets/7071220b-9569-4ffc-877f-4b8a24f94797)\n\t![image](https://github.com/user-attachments/assets/a36732f7-232e-41b6-817e-aa8e3e98a99f)\n\n3. **Maven Installation**\n   \n   _Download Maven from this_ [page](https://maven.apache.org/download.cgi) _and extract it to a folder such as_ `C:\\Program Files\\Apache\\Maven\\apache-maven-3.9.9`. 
_Configure your environment variables accordingly to ensure Maven is added to your system's PATH._\n\n   ![image](https://github.com/user-attachments/assets/f90123ba-98d4-4e99-ab68-f23ee47a7a2b)\n\n\u003cp align=\"center\"\u003e(You can click on the image to enlarge it)\u003c/p\u003e\n\n4. **Verify Maven Installation**\n   \n   _Open a terminal (or command prompt) and run:_\n   ```bash\n   mvn -v\n   ```\n   \u003cp align=\"center\"\u003eThis should display the Maven version and Java environment details.\u003c/p\u003e\n   \n---\n\n## \u003cp align=\"center\"\u003eProject Setup\u003c/p\u003e\n\n1. **Create Maven Project**\n   \n   _Open IntelliJ IDEA and create a new Maven project using the_ `maven-archetype-quickstart`. _Set the_ `GroupId` _to_ `com.user.cryptoetl` ⚠️ _(replace 'user' with your name) and_ `ArtifactId` _to_ `crypto-etl`.\n\n   ![image](https://github.com/user-attachments/assets/ca2f0863-9f32-4ef0-b1da-2f6a0d9a3573)\n\n\n2. **Dependencies**\n   \n   _Add the following dependencies to your_ `pom.xml`:\n\n   - **Testing dependency**\n   ```xml\n   \u003cdependency\u003e\n      \u003cgroupId\u003ejunit\u003c/groupId\u003e\n      \u003cartifactId\u003ejunit\u003c/artifactId\u003e\n      \u003cversion\u003e3.8.1\u003c/version\u003e\n      \u003cscope\u003etest\u003c/scope\u003e\n    \u003c/dependency\u003e\n   ```\n   \n   - **Dependency for handling HTTP requests**\n   ```xml\n   \u003cdependency\u003e\n      \u003cgroupId\u003ecom.konghq\u003c/groupId\u003e\n      \u003cartifactId\u003eunirest-java\u003c/artifactId\u003e\n      \u003cversion\u003e3.14.5\u003c/version\u003e\n    \u003c/dependency\u003e\n   ```\n\n   - **Dependency for handling JSON**\n   ```xml\n   \u003cdependency\u003e\n      \u003cgroupId\u003ecom.googlecode.json-simple\u003c/groupId\u003e\n      \u003cartifactId\u003ejson-simple\u003c/artifactId\u003e\n      \u003cversion\u003e1.1.1\u003c/version\u003e\n    \u003c/dependency\u003e\n   ```\n\n   - **SLF4J's logging dependency**\n   ```xml\n  
 \u003cdependency\u003e\n      \u003cgroupId\u003eorg.slf4j\u003c/groupId\u003e\n      \u003cartifactId\u003eslf4j-api\u003c/artifactId\u003e\n      \u003cversion\u003e2.0.0\u003c/version\u003e \u003c!-- Change the version if necessary --\u003e\n    \u003c/dependency\u003e\n    \u003cdependency\u003e\n      \u003cgroupId\u003eorg.slf4j\u003c/groupId\u003e\n      \u003cartifactId\u003eslf4j-simple\u003c/artifactId\u003e\n      \u003cversion\u003e2.0.0\u003c/version\u003e \u003c!-- Change the version if necessary --\u003e\n    \u003c/dependency\u003e\n   ```\n   \n   - **Azure's SDK for Blob Storage**:\n   ```xml\n   \u003cdependency\u003e\n     \u003cgroupId\u003ecom.azure\u003c/groupId\u003e\n     \u003cartifactId\u003eazure-storage-blob\u003c/artifactId\u003e\n     \u003cversion\u003e12.14.0\u003c/version\u003e\n   \u003c/dependency\u003e\n   ```\n\n   - **dotenv for environment variables**:\n   ```xml\n   \u003cdependency\u003e\n     \u003cgroupId\u003eio.github.cdimascio\u003c/groupId\u003e\n     \u003cartifactId\u003edotenv-java\u003c/artifactId\u003e\n     \u003cversion\u003e5.2.2\u003c/version\u003e\n   \u003c/dependency\u003e\n   ```\n\n---\n\n## \u003cp align=\"center\"\u003eAPI Interaction\u003c/p\u003e\n\n1. **CoinGecko API**\n   \n   _Register on_ [CoinGecko](https://www.coingecko.com/en/api) _to get your API key for cryptocurrency data. You can access your API dashboard_ [here](https://www.coingecko.com/en/developers/dashboard). ⬅️\n\n   ![image](https://github.com/user-attachments/assets/04d41c20-516a-4f11-a2ef-57fc460dfedf)\n\n\n2. **Endpoints**  \n   _Use the_ `/coins/markets` _endpoint to pull cryptocurrency data. For documentation, visit_ [CoinGecko API Docs](https://docs.coingecko.com/v3.0.1/reference/coins-markets).\n\n   \u003cp align=\"center\"\u003eThe documentation has an integrated AI that is surprisingly helpful.\u003c/p\u003e\n\n---\n\n## \u003cp align=\"center\"\u003eAzure Setup\u003c/p\u003e\n\n1. 
**Azure Account**\n    \n   _Sign up for an Azure account at_ [azure.microsoft.com](https://azure.microsoft.com). ⬅️\n\n2. **Blob Storage Setup**\n   \n   _Create a Blob Storage container on Azure with minimal security and cost-efficient settings. Modify your_ `CryptoETL.java` _file to include methods for uploading files to Azure Blob Storage, and keep your credentials in a_ `.env` _file (loaded through the dotenv dependency):_\n\n   ```properties\n   AZURE_CONNECTION_STRING=your_connection_string\n   AZURE_CONTAINER_NAME=your_container_name\n   ```\n\n   ![image](https://github.com/user-attachments/assets/d0fb627f-c106-4dd7-b02b-ae1de573ea65)\n\n\n---\n\n## \u003cp align=\"center\"\u003eDatabricks Cluster Setup\u003c/p\u003e\n\n1. **Create Databricks Cluster**\n   \n   _Create a cluster with the following configuration:_\n   - Runtime: `15.4.x-scala2.12`\n   - Node: `Standard_DC4as_v5`\n   - Driver: `16 GB Memory, 4 Cores`\n\n   ![image](https://github.com/user-attachments/assets/0637907a-dfb2-4aa4-8d76-4059f82b1b65)\n\n2. **Check Node Availability**\n   \n   _Use the Azure CLI to check available nodes in your region:_\n   ```bash\n   az vm list-skus --location centralus --size Standard_D --output table\n   ```\n   _If you want to check the documentation, click_ [here](https://learn.microsoft.com/en-us/azure/azure-resource-manager/troubleshooting/error-sku-not-available?tabs=azure-cli#code-try-4) ⬅️\n\n---\n\n## \u003cp align=\"center\"\u003eRunning the ETL Process\u003c/p\u003e\n\n1. **Run CryptoETL.java** ☕\n   \n   _Run the_ `CryptoETL.java` _file, which will:_\n\n    - Pull data from the CoinGecko API ↙️ (extract)\n    - Save the data as a CSV file ♻️ (transform)\n    - Upload the CSV to Azure Blob Storage ☁️ (load)\n\n2. **Databricks Notebook** 📙\n    \n   _Open the Databricks workspace, set up the account credentials, load the CSV files, and run the notebook for further analysis. 
More info in the section below_ ⬇️\n\n   ![image](https://github.com/user-attachments/assets/5ff1a523-c929-44e5-9beb-16940c8f38c5)\n\n---\n\n## \u003cp align=\"center\"\u003eJupyter Notebook\u003c/p\u003e\n\n1. **Spark Environment Settings**\n\n   _This configures the Spark environment so that Spark can access the Blob Storage on Azure using an account key:_\n   \n   ```python\n   spark.conf.set(\n    \"fs.azure.account.key.\u003cstorage_account_name\u003e.blob.core.windows.net\",\n    \"\u003cyour_account_key\u003e\"\n   )\n   ```\n   \u003cp align=\"center\"\u003eTo fill it in you will need to know your *Storage Account's name* and your *Account Key*.\u003c/p\u003e\n\n2. **Load Data into Databricks**\n   \n   _Load the CSV file into Databricks for further processing:_\n   \n   ```python\n   df = spark.read.csv(\"wasbs://\u003cyour_container_name\u003e@\u003cyour_storage_account_name\u003e.blob.core.windows.net/\u003cyour_csv_file_path\u003e\", header=True, inferSchema=True)\n   print(f\"Number of rows in the DataFrame: {df.count()}\")\n   print(f\"Number of partitions: {df.rdd.getNumPartitions()}\")\n   df.display()\n   ```\n   \u003cp align=\"center\"\u003eTo fill it in you will need to know your *Container's name* and your *Storage Account's name*; for the file path you can use *.csv to select all the files in the storage.\u003c/p\u003e\n\n3. 
**Filter and Analyze Data**\n   \n   Perform various filtering and analysis on the cryptocurrency data, such as:\n   \n   - *Filter by Current Price greater than 2000 USD.*\n   - *Filter by Current Price less than 101 USD.*\n   - *Filter by positive 24h Change.*\n   - *Filter by negative 24h Change.*\n   - *Filter by Name containing \"Bitcoin\".*\n   - *Filter by Name specifically \"Ethereum\".*\n   - *Filter by Name specifically \"Litecoin\".*\n\n   ```python\n   # Filter 1: Filter by prices greater than 2000 USD\n   filtered_df_1 = df.filter(df[\"`Current Price`\"] \u003e 2000)\n   print(\"Filtered by Current Price \u003e 2000:\")\n   display(filtered_df_1)\n\n   # Filter 2: Filter by prices less than 101 USD\n   filtered_df_2 = df.filter(df[\"`Current Price`\"] \u003c 101)\n   print(\"Filtered by Current Price \u003c 101:\")\n   display(filtered_df_2)\n\n   # Filter 3: Filter by positive 24h Change (if the column exists)\n   filtered_df_3 = df.filter(df[\"`24h Change`\"] \u003e 0)\n   print(\"Filtered by 24h Change \u003e 0:\")\n   display(filtered_df_3)\n\n   # Filter 4: Filter by negative 24h Change (if the column exists)\n   filtered_df_4 = df.filter(df[\"`24h Change`\"] \u003c 0)\n   print(\"Filtered by 24h Change \u003c 0:\")\n   display(filtered_df_4)\n\n   # Filter 5: Filter by coins containing \"Bitcoin\" in the Name\n   filtered_df_5 = df.filter(df[\"Name\"].like(\"%Bitcoin%\"))\n   print(\"Filtered by Name containing 'Bitcoin':\")\n   display(filtered_df_5)\n\n   # Filter 6: Filter by coins specifically named 'Ethereum'\n   filtered_df_6 = df.filter(df[\"Name\"] == \"Ethereum\")\n   print(\"Filtered by Name 'Ethereum':\")\n   display(filtered_df_6)\n\n   # Filter 7: Filter by coins specifically named 'Litecoin'\n   filtered_df_7 = df.filter(df[\"Name\"] == \"Litecoin\")\n   print(\"Filtered by Name 'Litecoin':\")\n   display(filtered_df_7)\n   ```\n   \n4. 
**Save File and Download**\n\n_Processes cryptocurrency data to compute average price changes and the most recent prices, then writes the results to a CSV file:_\n\n  ```python\n  from pyspark.sql.functions import col, max, avg\nfrom pyspark.sql.window import Window\n\n# Get the most recent 'Creation Date' for each 'Name'\nlatest_creation_date_df = df.withColumn(\"Creation Date\", col(\"`Creation Date`\").cast(\"timestamp\"))\nlatest_creation_date_df = latest_creation_date_df.withColumn(\n    \"max_creation_date\", max(\"Creation Date\").over(Window.partitionBy(\"Name\"))\n)\n\n# Filter rows where 'Creation Date' is the most recent\nfiltered_df = latest_creation_date_df.filter(col(\"Creation Date\") == col(\"max_creation_date\")).drop(\"max_creation_date\")\n\n# Group by 'Name' and calculate the average of '24h Change', and keep the most recent 'Current Price'\nagg_df = filtered_df.groupBy(\"Name\").agg(\n    avg(\"`24h Change`\").alias(\"AVG price change\"),  # Temporary alias\n    max(\"`Current Price`\")  # Keep the most recent 'Current Price'\n)\n\n# Rename the 'AVG price change' column permanently\nresult_df = agg_df.withColumnRenamed(\"AVG price change\", \"Average Price Change\")\n\n# Show the resulting DataFrame\ndisplay(result_df)\n\n# Write the result to a CSV file in /dbfs/tmp/result_df with semicolon as the delimiter\noutput_path = \"/dbfs/tmp/result_df\"\nresult_df.coalesce(1).write.mode(\"overwrite\").option(\"header\", \"true\").option(\"delimiter\", \";\").csv(output_path)\n\n# Find the generated CSV file in the directory\ncsv_files = dbutils.fs.ls(output_path)\ncsv_file_path = \"\"\nfor file_info in csv_files:\n    if file_info.name.endswith(\".csv\"):\n        csv_file_path = file_info.path\n        break\n\nif csv_file_path:\n    # Move the file to the /FileStore directory\n    dbutils.fs.mv(csv_file_path, \"dbfs:/FileStore/result_df.csv\")\n    # Read the contents of the CSV file and print it\n    file_content = 
dbutils.fs.head(\"dbfs:/FileStore/result_df.csv\")\n    result_df.show()\n    print(\"File content:\\n\")\n    print(file_content)\n    print(f\"\\nNumber of rows in result_df: {result_df.count()}\")\n    print(\"The prices are in USD.\")\n    print(\"The 'Average Price Change' is the result of averaging all the daily price changes.\")\nelse:\n    print(\"No CSV file was generated.\")\n  ```\n\n5. **Check Things**\n\n_These commands are really helpful; they let you delete the 'result_df.csv' file each time you need a new one:_\n\n```python\n# '/dbfs/' section:\n\n#dbutils.fs.ls(\"/dbfs/tmp/\") # Checks the 'tmp' directory to see if the 'result_df' directory was successfully created\n\n#dbutils.fs.ls(\"/dbfs/tmp/result_df/\") # Checks if the files in 'result_df' were successfully created (The one you want is the 'part-00000-tid-\u003cid\u003e.csv')\n\n#dbutils.fs.head(\"/dbfs/tmp/result_df/part-00000-tid-???.csv\") # Checks the content of the file; you need to fill in the id\n\n#dbutils.fs.rm(\"/dbfs/tmp/result_df/\", recurse=True) # Deletes the 'result_df' directory (don't worry, the code above creates it again)\n\n\n# 'dbfs:/' section:   \u003c-- This is the one you need to use\n\n#dbutils.fs.rm(\"dbfs:/FileStore/result_df.csv\", recurse=True) # Deletes the 'result_df.csv' file\n\n#dbutils.fs.ls(\"dbfs:/FileStore/\") # Checks the 'FileStore' directory to see if the 'result_df.csv' file was successfully created\n```\n\n\u003cp align=\"center\"\u003eJust uncomment what you want to use\u003c/p\u003e\n\n---\n\n## \u003cp align=\"center\"\u003eDownloading CSV File\u003c/p\u003e\n\nTo download the resulting CSV file from Databricks, follow these steps using the **Databricks CLI**:\n\n1. _Open_ **IntelliJ** _in the root folder of your project and create a new Python virtual environment by running the following command:_\n    ```bash\n    python -m venv databricks-env\n    ```\n\n2. 
_Activate the virtual environment:_\n    ```bash\n    databricks-env\\Scripts\\activate\n    ```\n\n3. _Install the Databricks CLI package using `pip`:_\n    ```bash\n    pip install databricks-cli\n    ```\n\n4. _Configure the CLI by setting your Databricks token:_\n    ```bash\n    databricks configure --token\n    ```\n    - Enter your domain when prompted: `https://adb-3022457162513861.1.azuredatabricks.net/` (Example)\n    - Enter the token generated from your Databricks account.\n    \n      * _To generate a token, go to your Databricks profile:  \n      Navigate to **User Settings \u003e Developer \u003e Access Tokens** and generate a new token._\n\n      * _To get your domain, simply copy it from your web browser's current URL._\n\n5. _Check if the output file was saved in the temporary folder on DBFS:_\n    ```bash\n    databricks fs ls dbfs:/FileStore/\n    ```\n\n6. _Download the file to your local_ `data` _directory (where all the files generated by_ `CryptoETL.java` _are stored):_\n    ```bash\n    databricks fs cp dbfs:/FileStore/result_df.csv ./data/result_df.csv\n    ```\n\n7. _To deactivate the virtual environment, use:_\n    ```bash\n    deactivate\n    ```\n\n---\n\n## \u003cp align=\"center\"\u003eTroubleshooting\u003c/p\u003e\n\n1. **Downloading the CSV file from Databricks ⚠️**\n    \n   _I tried many solutions for downloading the file directly from the Jupyter notebook, but in the end none of them worked. I searched the Databricks Forum, found people with the same issue, and they had solved it using the Databricks CLI._\n\n\n2. **Cluster Node Availability Issues ⛔** \n    \n   _If you encounter node availability issues when setting up your Databricks cluster, refer to the Azure documentation and run the necessary commands to list available nodes._\n\n   _If you want to see the documentation, follow this_ [link](https://learn.microsoft.com/en-us/azure/azure-resource-manager/troubleshooting/error-sku-not-available?tabs=azure-cli#code-try-4) \n\n3. 
**Issues with Excel Separators ❌**\n\n   _It can happen that when you download the_ `result_df.csv` _file it comes with commas or periods in places where they didn't appear when the CSV file was generated; that happens when Excel's separators setting is different from the one that was configured in the code environment._\n\n   _To fix it you can follow these instructions: In Excel, for example, you can adjust the decimal and thousands separators in:_\n`File \u003e Options \u003e Advanced \u003e Editing options \u003e Use system separators` _(uncheck this and set the period as the decimal separator)._ \n\n---\n\n## \u003cp align=\"center\"\u003eConclusion\u003c/p\u003e\n\n_CryptoETL successfully extracts cryptocurrency data, processes it, and stores it in Azure Blob Storage and Databricks. Additionally, the project uses the Databricks CLI to download the processed CSV file locally for further analysis or reporting. Future work may involve automating pipelines for continuous data processing and analysis._\n\n---\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fraul-dc%2Fcrypto-etl","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fraul-dc%2Fcrypto-etl","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fraul-dc%2Fcrypto-etl/lists"}