{"id":20911516,"url":"https://github.com/anant/example-cassandra-spark-sql","last_synced_at":"2025-12-26T21:54:46.517Z","repository":{"id":112529280,"uuid":"366194462","full_name":"Anant/example-cassandra-spark-sql","owner":"Anant","description":"Cassandra Data Operations with Spark SQL","archived":false,"fork":false,"pushed_at":"2021-05-11T20:33:05.000Z","size":5383,"stargazers_count":1,"open_issues_count":0,"forks_count":0,"subscribers_count":2,"default_branch":"main","last_synced_at":"2025-01-19T15:38:55.595Z","etag":null,"topics":["cassandra","data-operations","docker","etl","spark","spark-sql"],"latest_commit_sha":null,"homepage":"","language":null,"has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Anant.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2021-05-10T22:48:04.000Z","updated_at":"2022-08-03T19:16:19.000Z","dependencies_parsed_at":"2023-05-15T16:45:21.409Z","dependency_job_id":null,"html_url":"https://github.com/Anant/example-cassandra-spark-sql","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Anant%2Fexample-cassandra-spark-sql","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Anant%2Fexample-cassandra-spark-sql/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Anant%2Fexample-cassandra-spark-sql/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Anant%2Fexample-cassandra-spark-sql/manifests","owner_url":"https://
repos.ecosyste.ms/api/v1/hosts/GitHub/owners/Anant","download_url":"https://codeload.github.com/Anant/example-cassandra-spark-sql/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":243302009,"owners_count":20269439,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["cassandra","data-operations","docker","etl","spark","spark-sql"],"created_at":"2024-11-18T14:22:13.844Z","updated_at":"2025-12-26T21:54:46.470Z","avatar_url":"https://github.com/Anant.png","language":null,"readme":"# Apache Spark SQL for Cassandra Data Operations\nIn this walkthrough, we will cover how to use Spark SQL for Cassandra data operations. We will use the Spark Shell instead of the Spark SQL Shell because of the volume of logs that accompany every command in the Spark SQL Shell. We will also use the [Catalog](https://github.com/datastax/spark-cassandra-connector/blob/6a213676caf3323333753752600b5551a69845d5/doc/1_connecting.md#configuring-catalogs-to-cassandra) method from [DataStax's Spark Cassandra Connector](https://github.com/datastax/spark-cassandra-connector).\n\n## Prerequisites\n- Docker\n- Spark 3.0.X\n\n## 1. 
Set Up Dockerized Apache Cassandra\n\n### 1.1 - Clone the repo and cd into it\n```bash\ngit clone https://github.com/Anant/example-cassandra-spark-sql.git\n```\n\n```bash\ncd example-cassandra-spark-sql\n```\n\n### 1.2 - Start the Apache Cassandra container and mount the working directory\n```bash\ndocker run --name cassandra -p 9042:9042 -d -v \"$(pwd)\":/example-cassandra-spark-sql cassandra:latest\n```\n\n### 1.3 - Run `cqlsh`\n```bash\ndocker exec -it cassandra cqlsh\n```\n\n### 1.4 - Run `setup.cql` inside `cqlsh`\n```bash\nsource '/example-cassandra-spark-sql/setup.cql'\n```\n\n## 2. Start the Spark Shell\n\n### 2.1 - Navigate to your Spark directory and start a master in standalone cluster mode\n```bash\n./sbin/start-master.sh\n```\n\n### 2.2 - Start a worker and point it at the master\nYou can find your Spark master URL at `localhost:8080`.\n```bash\n./sbin/start-slave.sh \u003cmaster-url\u003e\n```\n\n### 2.3 - Start the Spark Shell\n```bash\n./bin/spark-shell --packages com.datastax.spark:spark-cassandra-connector_2.12:3.0.0 \\\n--master \u003cspark-master-url\u003e \\\n--conf spark.cassandra.connection.host=127.0.0.1 \\\n--conf spark.cassandra.connection.port=9042 \\\n--conf spark.sql.extensions=com.datastax.spark.connector.CassandraSparkExtensions \\\n--conf spark.sql.catalog.cassandra=com.datastax.spark.connector.datasource.CassandraCatalog\n```\n\n## 3. Basic Cassandra Schema Commands\nWe will cover some basic Cassandra schema commands we can run with Spark SQL. 
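One quick sanity check (a suggested addition, not part of the original walkthrough) is to confirm the `cassandra` catalog configured above is reachable by listing its keyspaces before running any schema commands:\n```scala\n// Lists the Cassandra keyspaces exposed through the catalog configured in step 2.3\nspark.sql(\"SHOW NAMESPACES FROM cassandra\").show\n```\n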
More on this can be found [here](https://github.com/datastax/spark-cassandra-connector/blob/42937e1ed01dd5aefb37fea38dbafc49ed44250e/doc/14_data_frames.md#supported-schema-commands).\n\n### 3.1 - Create Table\n```scala\nspark.sql(\"CREATE TABLE cassandra.demo.testTable (key_1 Int, key_2 Int, key_3 Int, cc1 String, cc2 String, cc3 String, value String) USING cassandra PARTITIONED BY (key_1, key_2, key_3) TBLPROPERTIES (clustering_key='cc1.asc, cc2.desc, cc3.asc', compaction='{class=SizeTieredCompactionStrategy,bucket_high=1001}')\")\n```\n\n### 3.2 - Alter Table\n```scala\nspark.sql(\"ALTER TABLE cassandra.demo.testTable ADD COLUMNS (newCol INT)\")\n```\n```scala\nspark.sql(\"DESCRIBE TABLE cassandra.demo.testTable\").show\n```\n\n### 3.3 - Drop Table\n```scala\nspark.sql(\"DROP TABLE cassandra.demo.testTable\")\n```\n```scala\nspark.sql(\"SHOW TABLES FROM cassandra.demo\").show\n```\n\n## 4. Basic Cassandra Data Operations with Spark SQL (Cassandra to Cassandra)\n\n### 4.1 - Read\nPerform a basic read:\n```scala\nspark.sql(\"SELECT * FROM cassandra.demo.previous_employees_by_job_title\").show\n```\n\n### 4.2 - Write\nWrite data into one table from another and apply SQL functions along the way:\n```scala\nspark.sql(\"INSERT INTO cassandra.demo.days_worked_by_previous_employees_by_job_title SELECT job_title, employee_id, employee_name, abs(datediff(last_day, first_day)) as number_of_days_worked FROM cassandra.demo.previous_employees_by_job_title\")\n```\n\n### 4.3 - Joins\nJoin data from two tables:\n```scala\nspark.sql(\"\"\"\nSELECT cassandra.demo.previous_employees_by_job_title.job_title, cassandra.demo.previous_employees_by_job_title.employee_name, cassandra.demo.previous_employees_by_job_title.first_day, cassandra.demo.previous_employees_by_job_title.last_day, cassandra.demo.days_worked_by_previous_employees_by_job_title.number_of_days_worked \nFROM cassandra.demo.previous_employees_by_job_title \nLEFT JOIN cassandra.demo.days_worked_by_previous_employees_by_job_title ON 
cassandra.demo.previous_employees_by_job_title.employee_id=cassandra.demo.days_worked_by_previous_employees_by_job_title.employee_id \nWHERE cassandra.demo.days_worked_by_previous_employees_by_job_title.job_title='Dentist'\n\"\"\").show\n```\n\n## 5. Truncate Tables with `cqlsh`\n\n```sql\nTRUNCATE TABLE demo.previous_employees_by_job_title;\n```\n```sql\nTRUNCATE TABLE demo.days_worked_by_previous_employees_by_job_title;\n```\n\n## 6. Basic Cassandra Data Operations with Spark SQL (Source File to Cassandra)\n\n### 6.1 - Restart the Spark Shell\n```bash\n./bin/spark-shell --packages com.datastax.spark:spark-cassandra-connector_2.12:3.0.0 \\\n--master \u003cspark-master-url\u003e \\\n--conf spark.cassandra.connection.host=127.0.0.1 \\\n--conf spark.cassandra.connection.port=9042 \\\n--conf spark.sql.extensions=com.datastax.spark.connector.CassandraSparkExtensions \\\n--conf spark.sql.catalog.cassandra=com.datastax.spark.connector.datasource.CassandraCatalog \\\n--files /path/to/example-cassandra-spark-sql/previous_employees_by_job_title.csv\n```\n\n### 6.2 - Load the CSV data into a DataFrame\n```scala\nval csv_df = spark.read.format(\"csv\").option(\"header\", \"true\").load(\"/path/to/example-cassandra-spark-sql/previous_employees_by_job_title.csv\")\n```\n\n### 6.3 - Create a temp view so we can use Spark SQL\n```scala\ncsv_df.createOrReplaceTempView(\"source\")\n```\n\n### 6.4 - Write into the Cassandra table using Spark SQL\n```scala\nspark.sql(\"INSERT INTO cassandra.demo.previous_employees_by_job_title SELECT * FROM source\")\n```\n\nAnd that wraps up the walkthrough on using Spark SQL for basic Cassandra data operations. 
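As a final check (a suggestion, not part of the original walkthrough), you can read the loaded table back to confirm the CSV rows landed in Cassandra:\n```scala\n// Count the rows written from the CSV source in step 6.4\nspark.sql(\"SELECT count(*) FROM cassandra.demo.previous_employees_by_job_title\").show\n```\n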
If you want to watch a live recording of the walkthrough, be sure to check out the YouTube video linked below!\n\n## Resources\n- [Accompanying Blog]()\n- [Accompanying YouTube]()\n- https://github.com/datastax/spark-cassandra-connector\n- https://docs.datastax.com/en/dse/6.8/dse-dev/datastax_enterprise/spark/sparkSqlSupportedSyntax.html\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fanant%2Fexample-cassandra-spark-sql","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fanant%2Fexample-cassandra-spark-sql","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fanant%2Fexample-cassandra-spark-sql/lists"}