{"id":305375,"url":"https://github.com/databricks/spark-xml","last_synced_at":"2025-03-25T00:38:53.175Z","repository":{"id":37396733,"uuid":"46900076","full_name":"databricks/spark-xml","owner":"databricks","description":"XML data source for Spark SQL and DataFrames","archived":false,"fork":false,"pushed_at":"2024-08-11T22:56:29.000Z","size":912,"stargazers_count":510,"open_issues_count":8,"forks_count":227,"subscribers_count":37,"default_branch":"master","last_synced_at":"2025-03-17T23:48:55.065Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Scala","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/databricks.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2015-11-26T02:46:09.000Z","updated_at":"2025-03-09T15:46:56.000Z","dependencies_parsed_at":"2023-02-04T06:31:06.043Z","dependency_job_id":"93bb63b3-b8a1-46a6-aa7e-986c280e3dbc","html_url":"https://github.com/databricks/spark-xml","commit_stats":{"total_commits":281,"total_committers":36,"mean_commits":7.805555555555555,"dds":0.5338078291814947,"last_synced_commit":"ddd1ef573a5318748763fafc974e4f7d8876fd6f"},"previous_names":[],"tags_count":23,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/databricks%2Fspark-xml","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/databricks%2Fspark-xml/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/databricks%2Fspark-xml/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/databricks%2Fspark-xml/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/databricks","download_url":"https://codeload.github.com/databricks/spark-xml/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":245377952,"owners_count":20605374,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-01-07T07:56:46.016Z","updated_at":"2025-03-25T00:38:53.144Z","avatar_url":"https://github.com/databricks.png","language":"Scala","readme":"# XML Data Source for Apache Spark 3.x\n\n- A library for parsing and querying XML data with [Apache Spark](https://spark.apache.org), for Spark SQL and DataFrames.\nThe structure and test tools are mostly copied from [CSV Data Source for Spark](https://github.com/databricks/spark-csv).\n\n- This package supports processing format-free XML files in a distributed way, unlike the JSON data source in Spark, which restricts input to line-delimited JSON.\n\n- Compatible with Spark 3.0 and later with Scala 2.12, and also Spark 3.2 and later with Scala 2.12 or 2.13. Scala 2.11 and Spark 2 support ended with version 0.13.0.\n\n- Currently, `spark-xml` is planned to [become a part of Apache Spark 4.0](https://github.com/apache/spark/pull/41832). This library will remain in maintenance mode for Spark 3.x versions.\n\n## Linking\n\nYou can link against this library in your program at the following coordinates:\n\n```\ngroupId: com.databricks\nartifactId: spark-xml_2.12\nversion: 0.18.0\n```
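\n\nFor sbt builds, the same coordinates can be expressed as a library dependency. A minimal sketch (adjust the `_2.12` suffix to your Scala binary version):\n\n```scala\n// build.sbt (sketch): pull in spark-xml via the Maven coordinates above\nlibraryDependencies += \"com.databricks\" % \"spark-xml_2.12\" % \"0.18.0\"\n```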
\n\n## Using with Spark shell\n\nThis package can be added to Spark using the `--packages` command line option. For example, to include it when starting the spark shell:\n\n```\n$SPARK_HOME/bin/spark-shell --packages com.databricks:spark-xml_2.12:0.18.0\n```\n\n## Features\n\nThis package allows reading XML files in a local or distributed filesystem as [Spark DataFrames](https://spark.apache.org/docs/latest/sql-programming-guide.html).\n\nWhen reading files the API accepts several options:\n\n* `path`: Location of files. As elsewhere in Spark, this can accept standard Hadoop globbing expressions.\n* `rowTag`: The row tag of your xml files to treat as a row. For example, in this xml `\u003cbooks\u003e \u003cbook\u003e\u003c/book\u003e ...\u003c/books\u003e`, the appropriate value would be `book`. Default is `ROW`.\n* `samplingRatio`: Sampling ratio for inferring schema (0.0 ~ 1). Default is 1. Possible inferred types are `StructType`, `ArrayType`, `StringType`, `LongType`, `DoubleType`, `BooleanType`, `TimestampType` and `NullType`, unless the user provides a schema.\n* `excludeAttribute`: Whether you want to exclude attributes in elements or not. Default is false.\n* `treatEmptyValuesAsNulls`: (DEPRECATED: use `nullValue` set to `\"\"`) Whether you want to treat whitespace-only values as nulls. Default is false.\n* `mode`: The mode for dealing with corrupt records during parsing. Default is `PERMISSIVE`.\n  * `PERMISSIVE`:\n    * When it encounters a corrupted record, it sets all fields to `null` and puts the malformed string into a new field configured by `columnNameOfCorruptRecord`.\n    * When it encounters a field of the wrong datatype, it sets the offending field to `null`.\n  * `DROPMALFORMED`: drops corrupted records entirely.\n  * `FAILFAST`: throws an exception when it encounters corrupted records.\n* `inferSchema`: if `true`, attempts to infer an appropriate type for each resulting DataFrame column, like a boolean, numeric or date type. If `false`, all resulting columns are of string type. Default is `true`.\n* `columnNameOfCorruptRecord`: The name of the new field where malformed strings are stored. Default is `_corrupt_record`.\n\n  Note: this field should be present in the DataFrame schema if the schema is passed explicitly, like this:\n  ```python\n  schema = StructType([StructField(\"my_field\", TimestampType()), StructField(\"_corrupt_record\", StringType())])\n  spark.read.format(\"xml\").options(rowTag='item').schema(schema).load(\"file.xml\")\n  ```\n  If the schema is inferred, this field is added automatically.\n* `attributePrefix`: The prefix for attributes so that we can differentiate attributes and elements. This will be the prefix for field names. Default is `_`. Can be empty, but only for reading XML.\n* `valueTag`: The tag used for the value when there are attributes in the element having no child. Default is `_VALUE`.\n* `charset`: Defaults to 'UTF-8' but can be set to other valid charset names.\n* `ignoreSurroundingSpaces`: Defines whether or not surrounding whitespaces from values being read should be skipped. Default is false.\n* `wildcardColName`: Name of a column existing in the provided schema which is interpreted as a 'wildcard'.\nIt must have type string or array of strings. It will match any XML child element that is not otherwise matched by the schema.\nThe XML of the child becomes the string value of the column. If an array, then all unmatched elements will be returned\nas an array of strings. As its name implies, it is meant to emulate XSD's `xs:any` type. Default is `xs_any`. New in 0.11.0.\n* `rowValidationXSDPath`: Path to an XSD file that is used to validate the XML for each row individually. Rows that fail to\nvalidate are treated like parse errors as above. The XSD does not otherwise affect the schema provided, or inferred.\nNote that if the same local path is not already also visible on the executors in the cluster, then the XSD and any others\nit depends on should be added to the Spark executors with\n[`SparkContext.addFile`](https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.SparkContext@addFile(path:String):Unit).\nIn this case, to use local XSD `/foo/bar.xsd`, call `addFile(\"/foo/bar.xsd\")` and pass just `\"bar.xsd\"` as `rowValidationXSDPath`.\n* `ignoreNamespace`: If true, namespace prefixes on XML elements and attributes are ignored. Tags `\u003cabc:author\u003e` and `\u003cdef:author\u003e` would,\nfor example, be treated as if both are just `\u003cauthor\u003e`. Note that, at the moment, namespaces cannot be ignored on the\n`rowTag` element, only its children. Note that XML parsing is in general not namespace-aware even if this is `false`.\nDefaults to `false`. New in 0.11.0.\n* `timestampFormat`: Specifies an additional timestamp format that will be tried when parsing values as `TimestampType`\ncolumns. The format is specified as described in [DateTimeFormatter](https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html).\nDefaults to trying several formats, including [ISO_INSTANT](https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html#ISO_INSTANT)\nand variations with offset timezones or no timezone (defaults to UTC). New in 0.12.0. As of 0.16.0, if a custom format pattern is used without a timezone, the default Spark timezone specified by `spark.sql.session.timeZone` will be used.\n* `timezone`: identifier of the timezone to be used when reading timestamps without a timezone specified. New in 0.16.0.\n* `dateFormat`: Specifies an additional date format that will be tried when parsing values as `DateType`\ncolumns. The format is specified as described in [DateTimeFormatter](https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html).\nDefaults to [ISO_DATE](https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html#ISO_DATE). New in 0.12.0.
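\n\nA minimal sketch of how several of the read options above combine (using the `books.xml` file from the Examples section below; the corrupt-record column name shown is just the default spelled out):\n\n```scala\nimport org.apache.spark.sql.SparkSession\nimport com.databricks.spark.xml._\n\nval spark = SparkSession.builder().getOrCreate()\n\n// Keep malformed rows, storing the raw text of each bad row in _corrupt_record,\n// and trim surrounding whitespace from element values.\nval df = spark.read\n  .option(\"rowTag\", \"book\")\n  .option(\"mode\", \"PERMISSIVE\")\n  .option(\"columnNameOfCorruptRecord\", \"_corrupt_record\")\n  .option(\"ignoreSurroundingSpaces\", \"true\")\n  .xml(\"books.xml\")\n```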
\n\nWhen writing files the API accepts several options:\n\n* `path`: Location to write files.\n* `rowTag`: The row tag of your xml files to treat as a row. For example, in `\u003cbooks\u003e \u003cbook\u003e\u003c/book\u003e ...\u003c/books\u003e`, the appropriate value would be `book`. Default is `ROW`.\n* `rootTag`: The root tag of your xml files to treat as the root. For example, in `\u003cbooks\u003e \u003cbook\u003e\u003c/book\u003e ...\u003c/books\u003e`, the appropriate value would be `books`. It can include basic attributes by specifying a value like `books foo=\"bar\"` (as of 0.11.0). Default is `ROWS`.\n* `declaration`: Content of the XML declaration to write at the start of every output XML file, before the `rootTag`. For example, a value of `foo` causes `\u003c?xml foo?\u003e` to be written. Set to empty string to suppress. Defaults to `version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"`. New in 0.14.0.\n* `arrayElementName`: Name of the XML element that encloses each element of an array-valued column when writing. Default is `item`. New in 0.16.0.\n* `nullValue`: The value to write for `null` values. Default is the string `null`. When this is `null`, attributes and elements are not written for fields with null values.\n* `attributePrefix`: The prefix for attributes so that we can differentiate attributes and elements. This will be the prefix for field names. Default is `_`. Cannot be empty for writing XML.\n* `valueTag`: The tag used for the value when there are attributes in the element having no child. Default is `_VALUE`.\n* `compression`: compression codec to use when saving to file. Should be the fully qualified name of a class implementing `org.apache.hadoop.io.compress.CompressionCodec` or one of the case-insensitive shortened names (`bzip2`, `gzip`, `lz4`, and `snappy`). Defaults to no compression when a codec is not specified.\n* `timestampFormat`: Controls the format used to write `TimestampType` columns.\nThe format is specified as described in [DateTimeFormatter](https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html).\nDefaults to [ISO_INSTANT](https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html#ISO_INSTANT). New in 0.12.0. As of 0.16.0, if a custom format pattern is used without a timezone, the default Spark timezone specified by `spark.sql.session.timeZone` will be used.\n* `timezone`: identifier of the timezone to be used when writing timestamps without a timezone specified. New in 0.16.0.\n* `dateFormat`: Controls the format used to write `DateType` columns.\nThe format is specified as described in [DateTimeFormatter](https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html).\nDefaults to [ISO_DATE](https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html#ISO_DATE). New in 0.12.0.
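\n\nA corresponding write sketch using a few of the options above (assuming a DataFrame `df`, for example one read as in the previous sketch; the output path is illustrative):\n\n```scala\nimport com.databricks.spark.xml._\n\n// Write each row as a book element under a books root, gzip-compressed.\ndf.write\n  .option(\"rootTag\", \"books\")\n  .option(\"rowTag\", \"book\")\n  .option(\"compression\", \"gzip\")\n  .xml(\"newbooks-gzip\")\n```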
\n\nThe shortened data source name is also supported: you can use just `xml` instead of `com.databricks.spark.xml`.\n\nNOTE: created files have no `.xml` extension.\n\n### XSD Support\n\nPer above, the XML for individual rows can be validated against an XSD using `rowValidationXSDPath`.\n\nThe utility `com.databricks.spark.xml.util.XSDToSchema` can be used to extract a Spark DataFrame\nschema from _some_ XSD files. It supports only simple, complex and sequence types, and only basic XSD functionality.\n\n```scala\nimport com.databricks.spark.xml.util.XSDToSchema\nimport java.nio.file.Paths\n\nval schema = XSDToSchema.read(Paths.get(\"/path/to/your.xsd\"))\nval df = spark.read.schema(schema)....xml(...)\n```\n\n### Parsing Nested XML\n\nAlthough primarily used to convert (portions of) large XML documents into a `DataFrame`,\n`spark-xml` can also parse XML in a string-valued column in an existing DataFrame with `from_xml`, in order to add\nit as a new column with the parsed results as a struct.\n\n```scala\nimport com.databricks.spark.xml.functions.from_xml\nimport com.databricks.spark.xml.schema_of_xml\nimport spark.implicits._\nval df = ... // DataFrame with XML in column 'payload'
\nval payloadSchema = schema_of_xml(df.select(\"payload\").as[String])\nval parsed = df.withColumn(\"parsed\", from_xml($\"payload\", payloadSchema))\n```\n\n- This can convert arrays of strings containing XML to arrays of parsed structs. Use `schema_of_xml_array` instead of `schema_of_xml` in that case.\n- `com.databricks.spark.xml.from_xml_string` is an alternative that operates on a String directly instead of a column,\n  for use in UDFs.\n- If you use `DROPMALFORMED` mode with `from_xml`, then XML values that do not parse correctly will result in a\n  `null` value for the column. No rows will be dropped.\n- If you use `PERMISSIVE` mode with `from_xml` and related functions (the default), the parse mode actually falls back\n  to `DROPMALFORMED`. If, however, you include a column in the schema for `from_xml` that matches `columnNameOfCorruptRecord`,\n  then `PERMISSIVE` mode will still output malformed records to that column in the resulting struct.\n\n#### Pyspark notes\n\nThe functions above are exposed in the Scala API only, at the moment, as there is no separate Python package\nfor `spark-xml`. They can be accessed from Pyspark by manually declaring some helper functions that call\ninto the JVM-based API from Python. Example:\n\n```python\nfrom pyspark.sql.column import Column, _to_java_column\nfrom pyspark.sql.types import _parse_datatype_json_string\n\n\ndef ext_from_xml(xml_column, schema, options={}):\n    # Parse an XML string column into a struct column via the JVM from_xml function.\n    java_column = _to_java_column(xml_column.cast('string'))\n    java_schema = spark._jsparkSession.parseDataType(schema.json())\n    scala_map = spark._jvm.org.apache.spark.api.python.PythonUtils.toScalaMap(options)\n    jc = spark._jvm.com.databricks.spark.xml.functions.from_xml(\n        java_column, java_schema, scala_map)\n    return Column(jc)\n\n\ndef ext_schema_of_xml_df(df, options={}):\n    # Infer the spark-xml schema of a DataFrame with a single string column.\n    assert len(df.columns) == 1\n\n    scala_options = spark._jvm.PythonUtils.toScalaMap(options)\n    java_xml_module = getattr(getattr(\n        spark._jvm.com.databricks.spark.xml, \"package$\"), \"MODULE$\")\n    java_schema = java_xml_module.schema_of_xml_df(df._jdf, scala_options)\n    return _parse_datatype_json_string(java_schema.json())\n```\n\n## Structure Conversion\n\nDue to the structure differences between `DataFrame` and XML, there are some conversion rules from XML data to `DataFrame` and from `DataFrame` to XML data. Note that handling attributes can be disabled with the option `excludeAttribute`.
\n\n### Conversion from XML to `DataFrame`\n\n- __Attributes__: Attributes are converted to fields, prefixed with `attributePrefix`.\n\n    ```xml\n    \u003cone myOneAttrib=\"AAAA\"\u003e\n        \u003ctwo\u003etwo\u003c/two\u003e\n        \u003cthree\u003ethree\u003c/three\u003e\n    \u003c/one\u003e\n    ```\n    produces the schema below:\n\n    ```\n    root\n     |-- _myOneAttrib: string (nullable = true)\n     |-- two: string (nullable = true)\n     |-- three: string (nullable = true)\n    ```\n\n- __Value in an element that has no child elements but attributes__: The value is put in a separate field, `valueTag`.\n\n    ```xml\n    \u003cone\u003e\n        \u003ctwo myTwoAttrib=\"BBBBB\"\u003etwo\u003c/two\u003e\n        \u003cthree\u003ethree\u003c/three\u003e\n    \u003c/one\u003e\n    ```\n    produces the schema below:\n    ```\n    root\n     |-- two: struct (nullable = true)\n     |    |-- _VALUE: string (nullable = true)\n     |    |-- _myTwoAttrib: string (nullable = true)\n     |-- three: string (nullable = true)\n    ```\n\n### Conversion from `DataFrame` to XML\n\n- __Element as an array in an array__: Writing an XML file from a `DataFrame` that has a field of `ArrayType` whose elements are themselves `ArrayType` adds an additional nested element for the inner array. This does not happen when reading and then writing XML data, but it can happen when writing a `DataFrame` read from another source. Therefore, a round trip of reading and writing XML files preserves the structure, while writing a `DataFrame` read from other sources may produce a different structure.\n\n    `DataFrame` with a schema below:\n    ```\n     |-- a: array (nullable = true)\n     |    |-- element: array (containsNull = true)\n     |    |    |-- element: string (containsNull = true)\n    ```\n\n    with data below:\n    ```\n    +------------------------------------+\n    |                                   a|\n    +------------------------------------+\n    |[WrappedArray(aa), WrappedArray(bb)]|\n    +------------------------------------+\n    ```\n\n    produces the XML file below:\n    ```xml\n    \u003ca\u003e\n        \u003citem\u003eaa\u003c/item\u003e\n    \u003c/a\u003e\n    \u003ca\u003e\n        \u003citem\u003ebb\u003c/item\u003e\n    \u003c/a\u003e\n    ```\n\n\n## Examples\n\nThese examples use an XML file available for download [here](https://github.com/databricks/spark-xml/raw/master/src/test/resources/books.xml):\n\n```\n$ wget https://github.com/databricks/spark-xml/raw/master/src/test/resources/books.xml\n```\n\n### SQL API\n\nThe XML data source for Spark can infer data types:\n```sql\nCREATE TABLE books\nUSING com.databricks.spark.xml\nOPTIONS (path \"books.xml\", rowTag \"book\")\n```\n\nYou can also specify column names and types in DDL; in this case, the schema is not inferred.
\n```sql\nCREATE TABLE books (author string, description string, genre string, _id string, price double, publish_date string, title string)\nUSING com.databricks.spark.xml\nOPTIONS (path \"books.xml\", rowTag \"book\")\n```\n\n### Scala API\n\nImport `com.databricks.spark.xml._` to get implicits that add the `.xml(...)` method to `DataFrameReader` and `DataFrameWriter`.\nYou can also use `.format(\"xml\")` and `.load(...)`.\n\n```scala\nimport org.apache.spark.sql.SparkSession\nimport com.databricks.spark.xml._\n\nval spark = SparkSession.builder().getOrCreate()\nval df = spark.read\n  .option(\"rowTag\", \"book\")\n  .xml(\"books.xml\")\n\nval selectedData = df.select(\"author\", \"_id\")\nselectedData.write\n  .option(\"rootTag\", \"books\")\n  .option(\"rowTag\", \"book\")\n  .xml(\"newbooks.xml\")\n```\n\nYou can manually specify the schema when reading data:\n\n```scala\nimport org.apache.spark.sql.SparkSession\nimport org.apache.spark.sql.types.{StructType, StructField, StringType, DoubleType}\nimport com.databricks.spark.xml._\n\nval spark = SparkSession.builder().getOrCreate()\nval customSchema = StructType(Array(\n  StructField(\"_id\", StringType, nullable = true),\n  StructField(\"author\", StringType, nullable = true),\n  StructField(\"description\", StringType, nullable = true),\n  StructField(\"genre\", StringType, nullable = true),\n  StructField(\"price\", DoubleType, nullable = true),\n  StructField(\"publish_date\", StringType, nullable = true),\n  StructField(\"title\", StringType, nullable = true)))\n\nval df = spark.read\n  .option(\"rowTag\", \"book\")\n  .schema(customSchema)\n  .xml(\"books.xml\")\n\nval selectedData = df.select(\"author\", \"_id\")\nselectedData.write\n  .option(\"rootTag\", \"books\")\n  .option(\"rowTag\", \"book\")\n  .xml(\"newbooks.xml\")\n```\n\n### Java API\n\n```java\nimport org.apache.spark.sql.Dataset;\nimport org.apache.spark.sql.Row;\nimport org.apache.spark.sql.SparkSession;\n\nSparkSession spark = SparkSession.builder().getOrCreate();\nDataset\u003cRow\u003e df = spark.read()\n  .format(\"xml\")\n  .option(\"rowTag\", \"book\")\n  .load(\"books.xml\");\n\ndf.select(\"author\", \"_id\").write()\n  .format(\"xml\")\n  .option(\"rootTag\", \"books\")\n  .option(\"rowTag\", \"book\")\n  .save(\"newbooks.xml\");\n```\n\nYou can manually specify the schema:\n```java\nimport org.apache.spark.sql.Dataset;\nimport org.apache.spark.sql.Row;\nimport org.apache.spark.sql.SparkSession;\nimport org.apache.spark.sql.types.*;\n\nSparkSession spark = SparkSession.builder().getOrCreate();\nStructType customSchema = new StructType(new StructField[] {\n  new StructField(\"_id\", DataTypes.StringType, true, Metadata.empty()),\n  new StructField(\"author\", DataTypes.StringType, true, Metadata.empty()),\n  new StructField(\"description\", DataTypes.StringType, true, Metadata.empty()),\n  new StructField(\"genre\", DataTypes.StringType, true, Metadata.empty()),\n  new StructField(\"price\", DataTypes.DoubleType, true, Metadata.empty()),\n  new StructField(\"publish_date\", DataTypes.StringType, true, Metadata.empty()),\n  new StructField(\"title\", DataTypes.StringType, true, Metadata.empty())\n});\n\nDataset\u003cRow\u003e df = spark.read()\n  .format(\"xml\")\n  .option(\"rowTag\", \"book\")\n  .schema(customSchema)\n  .load(\"books.xml\");\n\ndf.select(\"author\", \"_id\").write()\n  .format(\"xml\")\n  .option(\"rootTag\", \"books\")\n  .option(\"rowTag\", \"book\")\n  .save(\"newbooks.xml\");\n```\n\n### Python API\n\n```python\nfrom pyspark.sql import SparkSession\nspark = SparkSession.builder.getOrCreate()\n\ndf = spark.read.format('xml').options(rowTag='book').load('books.xml')
\ndf.select(\"author\", \"_id\").write \\\n    .format('xml') \\\n    .options(rowTag='book', rootTag='books') \\\n    .save('newbooks.xml')\n```\n\nYou can manually specify the schema:\n```python\nfrom pyspark.sql import SparkSession\nfrom pyspark.sql.types import *\n\nspark = SparkSession.builder.getOrCreate()\ncustomSchema = StructType([\n    StructField(\"_id\", StringType(), True),\n    StructField(\"author\", StringType(), True),\n    StructField(\"description\", StringType(), True),\n    StructField(\"genre\", StringType(), True),\n    StructField(\"price\", DoubleType(), True),\n    StructField(\"publish_date\", StringType(), True),\n    StructField(\"title\", StringType(), True)])\n\ndf = spark.read \\\n    .format('xml') \\\n    .options(rowTag='book') \\\n    .load('books.xml', schema=customSchema)\n\ndf.select(\"author\", \"_id\").write \\\n    .format('xml') \\\n    .options(rowTag='book', rootTag='books') \\\n    .save('newbooks.xml')\n```\n\n### R API\n\nAutomatically infer the schema (data types):\n```R\nlibrary(SparkR)\n\nsparkR.session(\"local[4]\", sparkPackages = c(\"com.databricks:spark-xml_2.12:0.18.0\"))\n\ndf \u003c- read.df(\"books.xml\", source = \"xml\", rowTag = \"book\")\n\n# In this case, `rootTag` is set to \"ROWS\" and `rowTag` is set to \"ROW\".\nwrite.df(df, \"newbooks.xml\", \"xml\", \"overwrite\")\n```\n\nYou can manually specify the schema:\n```R\nlibrary(SparkR)\n\nsparkR.session(\"local[4]\", sparkPackages = c(\"com.databricks:spark-xml_2.12:0.18.0\"))\ncustomSchema \u003c- structType(\n  structField(\"_id\", \"string\"),\n  structField(\"author\", \"string\"),\n  structField(\"description\", \"string\"),\n  structField(\"genre\", \"string\"),\n  structField(\"price\", \"double\"),\n  structField(\"publish_date\", \"string\"),\n  structField(\"title\", \"string\"))\n\ndf \u003c- read.df(\"books.xml\", source = \"xml\", schema = customSchema, rowTag = \"book\")\n\n# In this case, `rootTag` is set to \"ROWS\" and `rowTag` is set to \"ROW\".\nwrite.df(df, \"newbooks.xml\", \"xml\", \"overwrite\")\n```\n\n## Hadoop InputFormat\n\nThe library contains a Hadoop input format for reading XML files by a start tag and an end tag. This is similar to [XmlInputFormat.java](https://github.com/apache/mahout/blob/9d14053c80a1244bdf7157ab02748a492ae9868a/integration/src/main/java/org/apache/mahout/text/wikipedia/XmlInputFormat.java) in [Mahout](https://mahout.apache.org), but it supports reading compressed files and different encodings, and it reads whole elements including their attributes. You can use it directly as follows:\n\n```scala\nimport com.databricks.spark.xml.XmlInputFormat\nimport org.apache.spark.SparkContext\nimport org.apache.hadoop.io.{LongWritable, Text}\n\nval sc: SparkContext = spark.sparkContext // an existing SparkContext\n\n// This will detect the tags including attributes\nsc.hadoopConfiguration.set(XmlInputFormat.START_TAG_KEY, \"\u003cbook\u003e\")\nsc.hadoopConfiguration.set(XmlInputFormat.END_TAG_KEY, \"\u003c/book\u003e\")\n\nval records = sc.newAPIHadoopFile(\n  \"path\",\n  classOf[XmlInputFormat],\n  classOf[LongWritable],\n  classOf[Text])\n```
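\n\nThe resulting `records` RDD holds (offset, XML fragment) pairs. A small follow-on sketch for materializing the fragments as plain strings (converting each `Text` value immediately, since Hadoop record readers may reuse the same object across records):\n\n```scala\n// Convert each reused Text value to an immutable String before caching or collecting.\nval xmlStrings = records.map(_._2.toString)\nxmlStrings.take(2).foreach(println)\n```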
\n\n## Building From Source\n\nThis library is built with [SBT](https://www.scala-sbt.org/). To build a JAR file, simply run `sbt package` from the project root.\n\n## Acknowledgements\n\nThis project was initially created by [HyukjinKwon](https://github.com/HyukjinKwon) and donated to [Databricks](https://databricks.com).\n","funding_links":[],"categories":["Packages"],"sub_categories":["SQL Data Sources"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdatabricks%2Fspark-xml","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdatabricks%2Fspark-xml","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdatabricks%2Fspark-xml/lists"}