{"id":21126118,"url":"https://github.com/shigebeyond/sparkboot","last_synced_at":"2025-03-14T11:42:21.550Z","repository":{"id":208376965,"uuid":"718071092","full_name":"shigebeyond/SparkBoot","owner":"shigebeyond","description":null,"archived":false,"fork":false,"pushed_at":"2024-03-13T07:18:16.000Z","size":113,"stargazers_count":0,"open_issues_count":0,"forks_count":0,"subscribers_count":1,"default_branch":"master","last_synced_at":"2024-04-14T06:47:42.110Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/shigebeyond.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-11-13T10:12:00.000Z","updated_at":"2023-11-13T10:12:21.000Z","dependencies_parsed_at":"2024-11-20T06:15:09.667Z","dependency_job_id":null,"html_url":"https://github.com/shigebeyond/SparkBoot","commit_stats":null,"previous_names":["shigebeyond/sparkboot"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/shigebeyond%2FSparkBoot","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/shigebeyond%2FSparkBoot/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/shigebeyond%2FSparkBoot/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/shigebeyond%2FSparkBoot/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/shigebeyond","download_url":"https://codeload.github.com/shigebeyond/SparkBoot/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":243573168,"owners_count":20312879,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-11-20T04:39:12.989Z","updated_at":"2025-03-14T11:42:21.521Z","avatar_url":"https://github.com/shigebeyond.png","language":"Python","readme":"[GitHub](https://github.com/shigebeyond/SparkBoot) | [Gitee](https://gitee.com/shigebeyond/SparkBoot)\n\n# SparkBoot - yaml驱动Spark开发\n\n## 一、概述\nSpark太复杂了，特别是涉及到scala与python开发，学习与使用成本很高，因此创作了SparkBoot工具，开发人员仅编写yaml与sql即可实现复杂的Spark编程，为其屏蔽了底层开开发细节，减轻了开发难度，让其更专注于大数据ETL与分析的逻辑；\n\n框架通过编写简单的yaml与sql, 就可以执行一系列复杂的spark操作步骤, 如读数据/写数据/sql查询/打印变量等，压根不用写scala或python代码，极大的简化了伙伴Spark编程的工作量与工作难度，大幅提高人效；\n\n框架通过提供类似python`for`/`if`/`break`语义的步骤动作，赋予伙伴极大的开发能力与灵活性，能适用于广泛的应用场景。\n\n框架提供`include`机制，用来加载并执行其他的步骤yaml，一方面是功能解耦，方便分工，一方面是功能复用，提高效率与质量，从而推进脚本整体的工程化。\n\n## 二、特性\n1. 底层基于 pyspark 库来实现 \n2. 支持通过yaml来配置执行的步骤，简化了生成代码的开发:\n每个步骤可以有多个动作，但单个步骤中动作名不能相同（yaml语法要求）;\n动作代表spark上的一种操作，如read_csv/run_sql/run_python等等;\n3. 支持类似python`for`/`if`/`break`语义的步骤动作，灵活适应各种场景\n4. 支持`include`引用其他的yaml配置文件，以便解耦与复用;\n5. 支持用`schedule`动作来实现定时处理.\n6. 
## 3. Similar YAML-driven frameworks
* [HttpBoot](https://github.com/shigebeyond/HttpBoot) YAML-driven HTTP API automation and performance testing
* [SeleniumBoot](https://github.com/shigebeyond/SeleniumBoot) YAML-driven Selenium testing
* [AppiumBoot](https://github.com/shigebeyond/AppiumBoot) YAML-driven Appium testing
* [MiniumBoot](https://github.com/shigebeyond/MiniumBoot) YAML-driven Minium testing
* [ExcelBoot](https://github.com/shigebeyond/ExcelBoot) YAML-driven Excel generation
* [MonitorBoot](https://github.com/shigebeyond/MonitorBoot) YAML-driven Linux system monitoring, JVM performance monitoring, and alerting
* [SparkBoot](https://github.com/shigebeyond/SparkBoot) YAML-driven Spark development
* [K8sBoot](https://github.com/shigebeyond/K8sBoot) simplified k8s resource definition files
* [ArgoFlowBoot](https://github.com/shigebeyond/ArgoFlowBoot) simplified Argo Workflows workflow definition files

## 4. Todo
1. Support more actions

## 5. Installation
```
pip3 install SparkBoot
```

Installation creates the command `SparkBoot`.

Note: on deepin Linux the command is placed in `~/.local/bin`; it is recommended to add that directory to the `PATH` environment variable, e.g.
```
export PATH="$PATH:/home/shi/.local/bin"
```

## 6. Usage
### 1 Local execution
```
# Run the Spark job defined in the step config file in local mode
SparkBoot step-config.yml

# Generate the job files used to submit the job with spark-submit in cluster/yarn mode
# Generated files: 1 submit.sh (the spark-submit command), 2 run.py (the Python entry file), 3 the step config file
# The submit command looks like: spark-submit --master spark://192.168.62.209:7077 run.py step-config.yml
SparkBoot step-config.yml -o output-directory-for-job-files
```

Running `SparkBoot example/test.yml` produces output like
```
2023-09-25 12:34:22,578 - ThreadPoolExecutor-0_0	- boot - DEBUG - handle action: set_vars={'outdir': '../data'}
2023-09-25 12:34:22,578 - ThreadPoolExecutor-0_0	- boot - DEBUG - handle action: else=[{'init_session': {'app': 'test'}, 'set_vars': {'outdir': '/output'}}]
2023-09-25 12:34:22,578 - ThreadPoolExecutor-0_0	- boot - DEBUG - handle action: read_jdbc={'user': {'url': 'jdbc:mysql://192.168.62.209:3306/test', 'table': 'user', 'properties': {'user': 'root', 'password': 'root', 'driver': 'com.mysql.jdbc.Driver'}}}
+---+--------+--------+------+---+------+
| id|username|password|  name|age|avatar|
+---+--------+--------+------+---+------+
|  1|        |        | shi-1|  1|  null|
|  2|        |        | shi-2|  2|  null|
|  3|        |        | shi-3|  3|  null|
+---+--------+--------+------+---+------+
only showing top 20 rows

2023-09-25 12:34:27,231 - ThreadPoolExecutor-0_0	- boot - DEBUG - handle action: write_csv={'user': {'path': '$outdir/user.csv', 'mode': 'overwrite'}}
2023-09-25 12:34:27,783 - ThreadPoolExecutor-0_0	- boot - DEBUG - handle action: read_csv={'user2': {'path': '$outdir/user.csv'}}
......
```
The command automatically executes the Spark job defined in `test.yml`.

### 2 Cluster execution
1. Generate the job files
```sh
SparkBoot udf-test.yml -u udf-test.py -o gen
```
The generated files are
```
shi@shi-PC:[~/code/python/SparkBoot/example]: tree gen
gen
├── run.py -- Python entry file
├── submit.sh -- submit command; adjust the master to your environment
├── udf-test.py -- UDF definition file
└── udf-test.yml -- step config file
```

2. Upload the generated directory to the Spark master node.
3. Run `submit.sh` to submit the job.

## 7. Step config file and demos
The step config file defines multiple steps; see the files under the source [example](example) directory.

The top-level elements are steps.

Each step contains multiple actions (such as read_csv/run_sql/run_python). If two actions would share a name, put the second one in a new step; this is a limitation of YAML syntax and does not affect execution.

Three demos:
1. Basic API test: see [example/test.yml](example/test.yml)
2. Simple word count: see [example/word-count.yml](example/word-count.yml)
3. More complex order statistics: see [example/order-stat.yml](example/order-stat.yml)
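As a rough illustration of what the word-count demo might contain (the actual example/word-count.yml may differ), the whole pipeline can be expressed with the read_text, run_sql, and write_csv actions documented in the next section; the two SQL statements are taken from the run_sql example below:

```yaml
# word-count sketch (illustrative; see example/word-count.yml for the real file)
- init_session:
    app: word-count
    log_level: error
- read_text:
    lines: /data/input/words.txt      # each line of text becomes a row with a `value` column
- run_sql:
    words: select explode(split(value, " ")) as word from lines
    word_count: select word, count(1) as cnt from words group by word
- write_csv:
    word_count:
      path: /data/output/word_count.csv
      mode: overwrite
```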
## 8. Configuration reference
Execution steps are configured in YAML.

Each step can contain multiple actions, but action names must be unique within a single step (a YAML syntax requirement).

An action represents one Spark operation, such as read_csv/run_sql/run_python.

Each action is described below.

### 1 Session initialization actions
1. init_session: initialize the Spark session
```yaml
- init_session:
    app: test
    #master: local[*] # only use a local master when debugging locally; when running in a cluster, delete this line and pass the master to spark-submit
    log_level: error # log level
```

### 2 Batch-read actions (the extract in ETL)
2. read_csv: read csv data
```yaml
read_csv:
  # key is the table name, value is the csv file path
  user: /data/input/user.csv
```

3. read_json: read json data
```yaml
read_json:
  # key is the table name, value is the json file path
  user: /data/input/user.json
  order: http://127.0.0.1:8080/minimini.json # remote files are downloaded locally first
```

4. read_orc: read orc data
```yaml
read_orc:
  # key is the table name, value is the orc file path
  user: /data/input/user.orc
```

5. read_parquet: read parquet data
```yaml
read_parquet:
  # key is the table name, value is the parquet file path
  user: /data/input/user.parquet
```

6. read_text: read text data
```yaml
read_text:
  # key is the table name, value is the text file path
  lines: /data/input/words.txt
```

7. read_jdbc: read jdbc data
```yaml
read_jdbc:
    # key is the table name, value is the jdbc connection config
    user:
      url: jdbc:mysql://192.168.62.209:3306/test
      table: user # table
      # table: (SELECT * FROM user WHERE id <= 10) AS tmp # query sql
      properties:
        user: root
        password: root
        driver: com.mysql.jdbc.Driver # copy the mysql driver jar in advance, see pyspark.md
```

8. read_table: read an existing table
```yaml
# dict form
read_table:
    # key is the new table name, value is the existing table name
    user2: user
# list form
read_table:
  - user
```

### 3 Stream-read actions (the extract in ETL)
9. reads_rate: read a simulated rate stream
```yaml
reads_rate:
    # key is the table name, value is the parameters
    user:
      rowsPerSecond: 10 # generate 10 rows per second
```

10. reads_socket: read a socket stream
```yaml
reads_socket:
    # key is the table name, value is the socket server ip:port
    user: localhost:9999
```

11. reads_kafka: read a kafka stream
```yaml
reads_kafka:
    # key is the table name, value is the kafka brokers + topic
    user:
      brokers: localhost:9092 # separate multiple brokers with commas
      topic: test
```

12. reads_csv: read a csv stream
```yaml
reads_csv:
  # key is the table name, value is the csv file path
  user: /data/input/user.csv
```

13. reads_json: read a json stream
```yaml
reads_json:
  # key is the table name, value is the json file path
  user: /data/input/user.json
```

14. reads_orc: read an orc stream
```yaml
reads_orc:
  # key is the table name, value is the orc file path
  user: /data/input/user.orc
```

15. reads_parquet: read a parquet stream
```yaml
reads_parquet:
  # key is the table name, value is the parquet file path
  user: /data/input/user.parquet
```

16. reads_text: read a text stream
```yaml
reads_text:
  # key is the table name, value is the text file path
  lines: /data/input/words.txt
```

### 4 Transform actions (the transform in ETL)
17. run_sql: run sql
```yaml
- run_sql:
    # key is the table name, value is the query sql
    words: select explode(split(value," ")) as word from lines
    word_count: select word, count(1) as cnt from words group by word
```

18. run_python: run a python snippet
```yaml
- run_python:
    # key is the table name, value is the python snippet
    test: |
          user.select("id", "name", "age").filter("id <= 10")
```
See [python-test.yml](example/python-test.yml); `user` is the variable bound to a previously defined table name and holds a Spark DataFrame object.
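The two transform actions can be mixed in one pipeline. Below is a small illustrative sketch, assuming the result of a run_python snippet is registered under its key just like a run_sql result; the table and column names are made up:

```yaml
# mixing sql and python transforms (illustrative)
- read_csv:
    user: /data/input/user.csv
- run_sql:
    adult_user: select id, name, age from user where age >= 18
- run_python:
    # `adult_user` is available as a DataFrame variable created by the previous step
    renamed_user: |
          adult_user.withColumnRenamed("name", "username")
```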
### 5 Batch-write actions (the load in ETL)
19. write_console: write data to the console
```yaml
write_console:
  # key is the table name, value is the parameters
  user:
    mode: complete # append/update/complete
```

20. write_csv: write csv data
```yaml
write_csv:
    # key is the table name, value is the csv file path
    user: /data/output/user.csv
# or
write_csv:
    user:
      path: /data/output/user.csv
      mode: overwrite # mode: append/overwrite/ignore
      #compression: none # no compression
```

21. write_json: write json data
```yaml
write_json:
    # key is the table name, value is the json file path
    user: /data/output/user.json
```

22. write_orc: write orc data
```yaml
write_orc:
    # key is the table name, value is the orc file path
    user: /data/output/user.orc
```

23. write_parquet: write parquet data
```yaml
write_parquet:
    # key is the table name, value is the parquet file path
    user: /data/output/user.parquet
```

24. write_text: write text data
```yaml
write_text:
    # key is the table name, value is the text file path
    user: /data/output/user.txt
```

25. write_jdbc: write jdbc data
```yaml
write_jdbc:
    # key is the table name, value is the jdbc connection config
    user:
      url: jdbc:mysql://192.168.62.209:3306/test
      table: user
      properties:
        user: root
        password: root
        driver: com.mysql.jdbc.Driver # copy the mysql driver jar in advance, see pyspark.md
```

### 6 Stream-write actions (the load in ETL)
26. writes_console: write a stream to the console
```yaml
writes_console:
  # key is the table name, value is the parameters
  user:
    checkpointLocation: path/to/checkpoint/dir
    outputMode: complete # append/update/complete
    #trigger: 5 # trigger interval; accepts an int (5 means 5 seconds) or a str (such as "5 seconds")
```

27. writes_mem: write a stream to an in-memory table
```yaml
writes_mem:
  # key is the table name, value is the parameters
  user:
    checkpointLocation: path/to/checkpoint/dir
    outputMode: complete # append/update/complete
    #trigger: 5 # trigger interval; accepts an int (5 means 5 seconds) or a str (such as "5 seconds")
    queryName: tmp_user # in-memory table name
```

28. writes_kafka: write a stream to kafka
```yaml
writes_kafka:
  # key is the table name, value is the kafka brokers + topic
  user:
    brokers: localhost:9092 # separate multiple brokers with commas
    topic: test
    checkpointLocation: path/to/checkpoint/dir
    outputMode: complete # append/update/complete
    #trigger: 5 # trigger interval; accepts an int (5 means 5 seconds) or a str (such as "5 seconds")
```

29. writes_csv: write a csv stream
```yaml
writes_csv:
    # key is the table name, value is the parameters
    user:
      path: /data/output/user.csv
      mode: overwrite # mode: append/overwrite/ignore
      #compression: none # no compression
      checkpointLocation: path/to/checkpoint/dir
      outputMode: complete # append/update/complete
      #trigger: 5 # trigger interval; accepts an int (5 means 5 seconds) or a str (such as "5 seconds")
```

30. writes_json: write a json stream
```yaml
writes_json:
    # key is the table name, value is the parameters
    user:
      path: /data/output/user.json
      checkpointLocation: path/to/checkpoint/dir
      outputMode: complete # append/update/complete
      #trigger: 5 # trigger interval; accepts an int (5 means 5 seconds) or a str (such as "5 seconds")
```

31. writes_orc: write an orc stream
```yaml
writes_orc:
    # key is the table name, value is the parameters
    user:
      path: /data/output/user.orc
      checkpointLocation: path/to/checkpoint/dir
      outputMode: complete # append/update/complete
      #trigger: 5 # trigger interval; accepts an int (5 means 5 seconds) or a str (such as "5 seconds")
```

32. writes_parquet: write a parquet stream
```yaml
writes_parquet:
    # key is the table name, value is the parameters
    user:
      path: /data/output/user.parquet
      checkpointLocation: path/to/checkpoint/dir
      outputMode: complete # append/update/complete
      #trigger: 5 # trigger interval; accepts an int (5 means 5 seconds) or a str (such as "5 seconds")
```

33. writes_text: write a text stream
```yaml
writes_text:
    # key is the table name, value is the parameters
    user:
      path: /data/output/user.txt
      checkpointLocation: path/to/checkpoint/dir
      outputMode: complete # append/update/complete
      #trigger: 5 # trigger interval; accepts an int (5 means 5 seconds) or a str (such as "5 seconds")
```
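Putting the stream-read and stream-write actions together, a minimal illustrative streaming word count over a socket might look like the sketch below (host, checkpoint path, and table names are made up; the SQL reuses the run_sql example above):

```yaml
# streaming word count sketch (illustrative)
- init_session:
    app: stream-demo
    log_level: error
- reads_socket:
    lines: localhost:9999            # read lines from a socket server
- run_sql:
    words: select explode(split(value, " ")) as word from lines
    word_count: select word, count(1) as cnt from words group by word
- writes_console:
    word_count:
      checkpointLocation: /tmp/checkpoint/word_count
      outputMode: complete
```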
### 7 Table actions
34. list_tables: list all tables
```yaml
list_tables:
```

35. drop_table: drop a single table
```yaml
drop_table: user # drop table user
```

### 8 Cache actions
36. cache: cache the tables produced by the child actions
```yaml
- cache:
  - run_sql:
      my_order: select storeProvince,storeID,receivable,dateTS,payType from order where storeProvince != 'null' and receivable > 1000 # reads from the source file minimini.json
```

37. persist: persist the tables produced by the child actions
```yaml
- persist:
  - run_sql:
      top3_province_order: select my_order.* from my_order join top3_provinces where my_order.storeProvince = top3_provinces.storeProvince
```

### 9 Other actions
38. print: print; variables and functions can be interpolated
```yaml
# debug print
print: "total applications=${dyn_data.total_apply}, remaining=${dyn_data.quantity_remain}"
```

39. for: loop;
    the for action contains a list of child steps that are executed on each iteration; the variable `for_i` holds the iteration number (starting from 1), and `for_v` holds the current element (only when iterating over a list variable)
```yaml
# loop 3 times
for(3) :
  # child steps executed on each iteration
  - switch_sheet: test

# loop over the list variable urls
for(urls) :
  # child steps executed on each iteration
  - switch_sheet: test

# infinite loop, until a break action is hit
# the variable for_i holds the iteration number (starting from 1)
for:
  # child steps executed on each iteration
  - break_if: for_i>2 # break the loop when the condition holds
    switch_sheet: test
```

40. once: execute once, equivalent to `for(1)`;
    once combined with moveon_if can emulate Python's `if`
```yaml
once:
  # child steps executed on each iteration
  - moveon_if: for_i<=2 # continue when the condition holds, otherwise break the loop
    switch_sheet: test
```

41. break_if: break the loop when the condition holds;
    only allowed inside the child steps of for/once
```yaml
break_if: for_i>2 # condition expression, python syntax
```

42. moveon_if: continue when the condition holds, otherwise break the loop;
    only allowed inside the child steps of for/once
```yaml
moveon_if: for_i<=2 # condition expression, python syntax
```

43. if/else: run the if branch when the condition holds, otherwise run the else branch
```yaml
- set_vars:
    txt: 'enter homepage'
- if(txt=='enter homepage'): # the parentheses contain a boolean expression; when it is true the child steps under if run, otherwise the child steps under else run
    - print: '----- run if -----'
  else:
    - print: '----- run else -----'
```

44. include: include another step file, e.g. for shared steps or config data (such as usernames and passwords)
```yaml
include: part-common.yml
```

45. set_vars: set variables
```yaml
set_vars:
  name: shi
  password: 123456
  birthday: 5-27
```

46. print_vars: print all variables
```yaml
print_vars:
```

47. schedule: periodic processing; the child steps are executed every given number of seconds, e.g. to periodically output streaming results
```yaml
# periodic processing
- schedule(5): # every 5 seconds
    # child steps
    - print: 'triggered every 5s: ${now()}'
```

## 9. UDF (user-defined functions)
1. Define the UDFs: [udf-test.py](example/udf-test.py)
```python
from pyspark.sql.functions import udf
from pyspark.sql.types import *

@udf(returnType=DoubleType())
def add(m, n):
    return float(m) + float(n)

@udf(returnType=DoubleType())
def add_one(a):
    return float(a) + 1.0
```

2. Define the step file: [udf-test.yml](example/udf-test.yml)
```yaml
- debug: true # show() every DataFrame encountered
# 1 initialize the spark session
- init_session:
    app: test
    #master: local[*]
    log_level: error # log level
# 2 read mysql
- read_jdbc:
    user:
      url: jdbc:mysql://192.168.62.209:3306/test
      table: user
      properties:
        user: root
        password: root
        driver: com.mysql.jdbc.Driver # copy the mysql driver jar in advance, see pyspark.md
# 3 run sql: select with the udfs
- run_sql:
    test: select id,add_one(id),add(id,2) from user
```

3. Run from the command line; use `-u` to point at the Python file containing the UDFs
```sh
SparkBoot udf-test.yml -u udf-test.py
```
The result looks like

![](img/run-udf.png)