{"id":13856857,"url":"https://github.com/frictionlessdata/datapackage-py","last_synced_at":"2025-10-04T01:19:14.452Z","repository":{"id":60720849,"uuid":"44518409","full_name":"frictionlessdata/datapackage-py","owner":"frictionlessdata","description":"A Python library for working with Data Packages.","archived":false,"fork":false,"pushed_at":"2024-03-12T16:10:11.000Z","size":942,"stargazers_count":190,"open_issues_count":3,"forks_count":43,"subscribers_count":15,"default_branch":"main","last_synced_at":"2025-09-30T11:17:23.280Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"https://frictionlessdata.io","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/frictionlessdata.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE.md","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2015-10-19T07:34:46.000Z","updated_at":"2025-09-26T14:15:42.000Z","dependencies_parsed_at":"2022-10-03T18:51:15.973Z","dependency_job_id":"a47c8228-036d-41e2-8142-2e9fd0acf154","html_url":"https://github.com/frictionlessdata/datapackage-py","commit_stats":{"total_commits":672,"total_committers":35,"mean_commits":19.2,"dds":0.6324404761904762,"last_synced_commit":"f1a081001d667358455d0b6e158cf8484df417f9"},"previous_names":[],"tags_count":80,"template":false,"template_full_name":null,"purl":"pkg:github/frictionlessdata/datapackage-py","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/frictionlessdata%2Fdatapackage-py","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/frictionlessdata%2Fdatapackage-py/tags","releases_url":"https://repos.eco
syste.ms/api/v1/hosts/GitHub/repositories/frictionlessdata%2Fdatapackage-py/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/frictionlessdata%2Fdatapackage-py/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/frictionlessdata","download_url":"https://codeload.github.com/frictionlessdata/datapackage-py/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/frictionlessdata%2Fdatapackage-py/sbom","scorecard":{"id":411433,"data":{"date":"2025-08-11","repo":{"name":"github.com/frictionlessdata/datapackage-py","commit":"fd3a9af665cc0e5052437904258b063fb565c6e2"},"scorecard":{"version":"v5.2.1-40-gf6ed084d","commit":"f6ed084d17c9236477efd66e5b258b9d4cc7b389"},"score":3.8,"checks":[{"name":"Code-Review","score":0,"reason":"Found 2/30 approved changesets -- score normalized to 0","details":null,"documentation":{"short":"Determines if the project requires human code review before pull requests (aka merge requests) are merged.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#code-review"}},{"name":"Maintained","score":0,"reason":"0 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 0","details":null,"documentation":{"short":"Determines if the project is \"actively maintained\".","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#maintained"}},{"name":"Dangerous-Workflow","score":10,"reason":"no dangerous workflow patterns detected","details":null,"documentation":{"short":"Determines if the project's GitHub Action workflows avoid dangerous patterns.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#dangerous-workflow"}},{"name":"Token-Permissions","score":0,"reason":"detected GitHub workflow tokens with excessive permissions","details":["Warn: no topLevel permission defined: 
.github/workflows/general.yml:1","Info: no jobLevel write permissions found"],"documentation":{"short":"Determines if the project's workflows follow the principle of least privilege.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#token-permissions"}},{"name":"Binary-Artifacts","score":10,"reason":"no binaries found in the repo","details":null,"documentation":{"short":"Determines if the project has generated executable (binary) artifacts in the source repository.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#binary-artifacts"}},{"name":"CII-Best-Practices","score":0,"reason":"no effort to earn an OpenSSF best practices badge detected","details":null,"documentation":{"short":"Determines if the project has an OpenSSF (formerly CII) Best Practices Badge.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#cii-best-practices"}},{"name":"Pinned-Dependencies","score":0,"reason":"dependency not pinned by hash detected -- score normalized to 0","details":["Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/general.yml:44: update your workflow using https://app.stepsecurity.io/secureworkflow/frictionlessdata/datapackage-py/general.yml/main?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/general.yml:46: update your workflow using https://app.stepsecurity.io/secureworkflow/frictionlessdata/datapackage-py/general.yml/main?enable=pin","Warn: third-party GitHubAction not pinned by hash: .github/workflows/general.yml:57: update your workflow using https://app.stepsecurity.io/secureworkflow/frictionlessdata/datapackage-py/general.yml/main?enable=pin","Warn: third-party GitHubAction not pinned by hash: .github/workflows/general.yml:61: update your workflow using 
https://app.stepsecurity.io/secureworkflow/frictionlessdata/datapackage-py/general.yml/main?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/general.yml:24: update your workflow using https://app.stepsecurity.io/secureworkflow/frictionlessdata/datapackage-py/general.yml/main?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/general.yml:26: update your workflow using https://app.stepsecurity.io/secureworkflow/frictionlessdata/datapackage-py/general.yml/main?enable=pin","Warn: third-party GitHubAction not pinned by hash: .github/workflows/general.yml:34: update your workflow using https://app.stepsecurity.io/secureworkflow/frictionlessdata/datapackage-py/general.yml/main?enable=pin","Warn: pipCommand not pinned by hash: .github/workflows/general.yml:51","Warn: pipCommand not pinned by hash: .github/workflows/general.yml:52","Info:   0 out of   4 GitHub-owned GitHubAction dependencies pinned","Info:   0 out of   3 third-party GitHubAction dependencies pinned","Info:   0 out of   2 pipCommand dependencies pinned"],"documentation":{"short":"Determines if the project has declared and pinned the dependencies of its build process.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#pinned-dependencies"}},{"name":"Vulnerabilities","score":10,"reason":"0 existing vulnerabilities detected","details":null,"documentation":{"short":"Determines if the project has open, known unfixed vulnerabilities.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#vulnerabilities"}},{"name":"Fuzzing","score":0,"reason":"project is not fuzzed","details":["Warn: no fuzzer integrations found"],"documentation":{"short":"Determines if the project uses fuzzing.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#fuzzing"}},{"name":"License","score":10,"reason":"license file 
detected","details":["Info: project has a license file: LICENSE.md:0","Info: FSF or OSI recognized license: MIT License: LICENSE.md:0"],"documentation":{"short":"Determines if the project has defined a license.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#license"}},{"name":"Packaging","score":-1,"reason":"packaging workflow not detected","details":["Warn: no GitHub/GitLab publishing workflow detected."],"documentation":{"short":"Determines if the project is published as a package that others can easily download, install, easily update, and uninstall.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#packaging"}},{"name":"Signed-Releases","score":-1,"reason":"no releases found","details":null,"documentation":{"short":"Determines if the project cryptographically signs release artifacts.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#signed-releases"}},{"name":"Security-Policy","score":0,"reason":"security policy file not detected","details":["Warn: no security policy file detected","Warn: no security file to analyze","Warn: no security file to analyze","Warn: no security file to analyze"],"documentation":{"short":"Determines if the project has published a security policy.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#security-policy"}},{"name":"Branch-Protection","score":-1,"reason":"internal error: error during branchesHandler.setup: internal error: githubv4.Query: Resource not accessible by integration","details":null,"documentation":{"short":"Determines if the default and release branches are protected with GitHub's branch protection settings.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#branch-protection"}},{"name":"SAST","score":0,"reason":"SAST tool is not run on all commits -- score 
normalized to 0","details":["Warn: 0 commits out of 2 are checked with a SAST tool"],"documentation":{"short":"Determines if the project uses static code analysis.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#sast"}}]},"last_synced_at":"2025-08-18T22:48:10.325Z","repository_id":60720849,"created_at":"2025-08-18T22:48:10.325Z","updated_at":"2025-08-18T22:48:10.325Z"},"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":278252223,"owners_count":25956264,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-10-03T02:00:06.070Z","response_time":53,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-05T03:01:16.202Z","updated_at":"2025-10-04T01:19:14.423Z","avatar_url":"https://github.com/frictionlessdata.png","language":"Python","readme":"# 
datapackage-py\n\n[![Build](https://img.shields.io/github/workflow/status/frictionlessdata/datapackage-py/general/main)](https://github.com/frictionlessdata/datapackage-py/actions)\n[![Coverage](https://img.shields.io/codecov/c/github/frictionlessdata/datapackage-py/main)](https://codecov.io/gh/frictionlessdata/datapackage-py)\n[![Release](https://img.shields.io/pypi/v/datapackage.svg)](https://pypi.python.org/pypi/datapackage)\n[![Codebase](https://img.shields.io/badge/codebase-github-brightgreen)](https://github.com/frictionlessdata/datapackage-py)\n[![Support](https://img.shields.io/badge/support-discord-brightgreen)](https://discordapp.com/invite/Sewv6av)\n\nA library for working with [Data Packages](http://specs.frictionlessdata.io/data-package/).\n\n\u003e **[Important Notice]** We have released [Frictionless Framework](https://github.com/frictionlessdata/framework). This framework provides improved `datapackage` functionality extended to be a complete data solution. The change is not breaking for existing software, so no action is required. 
Please read the [Migration Guide](https://framework.frictionlessdata.io/docs/codebase/migration.html) from `datapackage` to Frictionless Framework.\n\n## Features\n\n - `Package` class for working with data packages\n - `Resource` class for working with data resources\n - `Profile` class for working with profiles\n - `validate` function for validating data package descriptors\n - `infer` function for inferring data package descriptors\n\n## Contents\n\n\u003c!--TOC--\u003e\n\n  - [Getting Started](#getting-started)\n    - [Installation](#installation)\n  - [Documentation](#documentation)\n    - [Introduction](#introduction)\n    - [Working with Package](#working-with-package)\n    - [Working with Resource](#working-with-resource)\n    - [Working with Group](#working-with-group)\n    - [Working with Profile](#working-with-profile)\n    - [Working with Foreign Keys](#working-with-foreign-keys)\n    - [Working with validate/infer](#working-with-validateinfer)\n    - [Frequently Asked Questions](#frequently-asked-questions)\n  - [API Reference](#api-reference)\n    - [`cli`](#cli)\n    - [`Package`](#package)\n    - [`Resource`](#resource)\n    - [`Group`](#group)\n    - [`Profile`](#profile)\n    - [`validate`](#validate)\n    - [`infer`](#infer)\n    - [`DataPackageException`](#datapackageexception)\n    - [`TableSchemaException`](#tableschemaexception)\n    - [`LoadError`](#loaderror)\n    - [`CastError`](#casterror)\n    - [`IntegrityError`](#integrityerror)\n    - [`RelationError`](#relationerror)\n    - [`StorageError`](#storageerror)\n  - [Contributing](#contributing)\n  - [Changelog](#changelog)\n\n\u003c!--TOC--\u003e\n\n## Getting Started\n\n### Installation\n\nThe package uses semantic versioning, which means major versions could include breaking changes. It's highly recommended to specify a `datapackage` version range in your `setup/requirements` file, e.g. 
`datapackage\u003e=1.0,\u003c2.0`.\n\n```bash\n$ pip install datapackage\n```\n\n#### OSX 10.14+\nIf you receive an error about the `cchardet` package when installing datapackage on macOS 10.14 (Mojave) or higher, follow these steps:\n1. Make sure you have the latest Xcode by running the following in terminal: `xcode-select --install`\n2. Then go to [https://developer.apple.com/download/more/](https://developer.apple.com/download/more/) and download the `command line tools`. Note, this requires an Apple ID.\n3. Then, in terminal, run `open /Library/Developer/CommandLineTools/Packages/macOS_SDK_headers_for_macOS_10.14.pkg`\nYou can read more about these steps in this [post](https://stackoverflow.com/questions/52509602/cant-compile-c-program-on-a-mac-after-upgrade-to-mojave).\n\n## Documentation\n\n### Introduction\n\nLet's start with a simple example:\n\n```python\nfrom datapackage import Package\n\npackage = Package('datapackage.json')\npackage.get_resource('resource').read()\n```\n\n### Working with Package\n\nA class for working with data packages. It provides various capabilities like loading local or remote data packages, inferring a data package descriptor, saving a data package descriptor and many more.\n\nSuppose we have some local CSV files in a `data` directory. Let's create a data package based on this data using the `Package` class:\n\n\u003e data/cities.csv\n\n```csv\ncity,location\nlondon,\"51.50,-0.11\"\nparis,\"48.85,2.30\"\nrome,\"41.89,12.51\"\n```\n\n\u003e data/population.csv\n\n```csv\ncity,year,population\nlondon,2017,8780000\nparis,2017,2240000\nrome,2017,2860000\n```\n\nFirst we create a blank data package:\n\n```python\npackage = Package()\n```\n\nNow we're ready to infer a data package descriptor based on the data files we have. 
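To follow along on disk, the two CSV files above can be created with a short standard-library snippet (the `data/` paths match the ones used in this walkthrough):

```python
import csv
import os

# Create the data/ directory and the two CSV files used in this
# walkthrough, with contents exactly as shown above.
os.makedirs("data", exist_ok=True)

with open("data/cities.csv", "w", newline="") as f:
    writer = csv.writer(f)  # quotes the location values because they contain commas
    writer.writerows([
        ["city", "location"],
        ["london", "51.50,-0.11"],
        ["paris", "48.85,2.30"],
        ["rome", "41.89,12.51"],
    ])

with open("data/population.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerows([
        ["city", "year", "population"],
        ["london", 2017, 8780000],
        ["paris", 2017, 2240000],
        ["rome", 2017, 2860000],
    ])
```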
Because we have two CSV files, we use the glob pattern `**/*.csv`:\n\n```python\npackage.infer('**/*.csv')\npackage.descriptor\n#{ profile: 'tabular-data-package',\n#  resources:\n#   [ { path: 'data/cities.csv',\n#       profile: 'tabular-data-resource',\n#       encoding: 'utf-8',\n#       name: 'cities',\n#       format: 'csv',\n#       mediatype: 'text/csv',\n#       schema: [Object] },\n#     { path: 'data/population.csv',\n#       profile: 'tabular-data-resource',\n#       encoding: 'utf-8',\n#       name: 'population',\n#       format: 'csv',\n#       mediatype: 'text/csv',\n#       schema: [Object] } ] }\n```\n\nThe `infer` method has found all our files and inspected them to extract useful metadata like profile, encoding, format, Table Schema etc. Let's tweak it a little bit:\n\n```python\npackage.descriptor['resources'][1]['schema']['fields'][1]['type'] = 'year'\npackage.commit()\npackage.valid # True\n```\n\nBecause our resources are tabular, we can read them as tabular data:\n\n```python\npackage.get_resource('population').read(keyed=True)\n#[ { city: 'london', year: 2017, population: 8780000 },\n#  { city: 'paris', year: 2017, population: 2240000 },\n#  { city: 'rome', year: 2017, population: 2860000 } ]\n```\n\nLet's save our data package to disk as a zip file:\n\n```python\npackage.save('datapackage.zip')\n```\n\nTo continue working with the data package, we just load it again, this time using the local `datapackage.zip`:\n\n```python\npackage = Package('datapackage.zip')\n# Continue the work\n```\n\nThis was only a basic introduction to the `Package` class. To learn more, take a look at the `Package` class API reference.\n\n### Working with Resource\n\nA class for working with data resources. You can read or iterate tabular resources using the `iter/read` methods and the whole resource as bytes using the `raw_iter/raw_read` methods.\n\nSuppose we have a local CSV file. 
It could be inline data or a remote link, all supported by the `Resource` class (except local files for in-browser usage, of course). But say it's `data.csv` for now:\n\n```csv\ncity,location\nlondon,\"51.50,-0.11\"\nparis,\"48.85,2.30\"\nrome,N/A\n```\n\nLet's create and read a resource. Because the resource is tabular, we can use the `resource.read` method with the `keyed` option to get an array of keyed rows:\n\n```python\nfrom datapackage import Resource\n\nresource = Resource({'path': 'data.csv'})\nresource.tabular # True\nresource.read(keyed=True)\n# [\n#   {city: 'london', location: '51.50,-0.11'},\n#   {city: 'paris', location: '48.85,2.30'},\n#   {city: 'rome', location: 'N/A'},\n# ]\nresource.headers\n# ['city', 'location']\n# (reading has to be started first)\n```\n\nAs we can see, our locations are just strings, but they should be geopoints. Also, Rome's location is not available, but it's just an `N/A` string instead of Python `None`. First we have to infer the resource metadata:\n\n```python\nresource.infer()\nresource.descriptor\n#{ path: 'data.csv',\n#  profile: 'tabular-data-resource',\n#  encoding: 'utf-8',\n#  name: 'data',\n#  format: 'csv',\n#  mediatype: 'text/csv',\n#  schema: { fields: [ [Object], [Object] ], missingValues: [ '' ] } }\nresource.read(keyed=True)\n# Fails with a data validation error\n```\n\nLet's fix the unavailable location. There is a `missingValues` property in the Table Schema specification. As a first try, we set `missingValues` to `N/A` in `resource.descriptor['schema']`. A resource descriptor can be changed in place, but all changes should be committed with `resource.commit()`:\n\n```python\nresource.descriptor['schema']['missingValues'] = 'N/A'\nresource.commit()\nresource.valid # False\nresource.errors\n# [\u003cValidationError: \"'N/A' is not of type 'array'\"\u003e]\n```\n\nBeing good citizens, we've decided to check our resource descriptor's validity. And it's not valid! We should use an array for the `missingValues` property. 
Also don't forget to have an empty string as a missing value:\n\n```python\nresource.descriptor['schema']['missingValues'] = ['', 'N/A']\nresource.commit()\nresource.valid # True\n```\n\nAll good. It looks like we're ready to read our data again:\n\n```python\nresource.read(keyed=True)\n# [\n#   {city: 'london', location: [51.50,-0.11]},\n#   {city: 'paris', location: [48.85,2.30]},\n#   {city: 'rome', location: None},\n# ]\n```\n\nNow we see that:\n- locations are arrays with numeric latitude and longitude\n- Rome's location is a native Python `None`\n\nAnd because there are no errors on data reading, we can be sure that our data is valid against our schema. Let's save our resource descriptor:\n\n```python\nresource.save('dataresource.json')\n```\n\nLet's check the newly created `dataresource.json`. It contains the path to our data file, the inferred metadata and our `missingValues` tweak:\n\n```json\n{\n    \"path\": \"data.csv\",\n    \"profile\": \"tabular-data-resource\",\n    \"encoding\": \"utf-8\",\n    \"name\": \"data\",\n    \"format\": \"csv\",\n    \"mediatype\": \"text/csv\",\n    \"schema\": {\n        \"fields\": [\n            {\n                \"name\": \"city\",\n                \"type\": \"string\",\n                \"format\": \"default\"\n            },\n            {\n                \"name\": \"location\",\n                \"type\": \"geopoint\",\n                \"format\": \"default\"\n            }\n        ],\n        \"missingValues\": [\n            \"\",\n            \"N/A\"\n        ]\n    }\n}\n```\n\nIf we decide to improve it even more, we can update the `dataresource.json` file and then open it again using the local file name:\n\n```python\nresource = Resource('dataresource.json')\n# Continue the work\n```\n\nThis was only a basic introduction to the `Resource` class. To learn more, take a look at the `Resource` class API reference.\n\n### Working with Group\n\nA class representing a group of tabular resources. 
Groups can be used to read multiple resources as one or to export them, for example, to a database as one table. To define a group, add the `group: \u003cname\u003e` field to the corresponding resources. The group's metadata will be created from the \"leading\" resource's metadata (the first resource with the group name).\n\nConsider we have a data package with two tables partitioned by year and a shared schema stored separately:\n\n\u003e  cars-2017.csv\n\n```csv\nname,value\nbmw,2017\ntesla,2017\nnissan,2017\n```\n\n\u003e  cars-2018.csv\n\n```csv\nname,value\nbmw,2018\ntesla,2018\nnissan,2018\n```\n\n\u003e cars.schema.json\n\n```json\n{\n    \"fields\": [\n        {\n            \"name\": \"name\",\n            \"type\": \"string\"\n        },\n        {\n            \"name\": \"value\",\n            \"type\": \"integer\"\n        }\n    ]\n}\n```\n\n\u003e datapackage.json\n\n```json\n{\n    \"name\": \"datapackage\",\n    \"resources\": [\n        {\n            \"group\": \"cars\",\n            \"name\": \"cars-2017\",\n            \"path\": \"cars-2017.csv\",\n            \"profile\": \"tabular-data-resource\",\n            \"schema\": \"cars.schema.json\"\n        },\n        {\n            \"group\": \"cars\",\n            \"name\": \"cars-2018\",\n            \"path\": \"cars-2018.csv\",\n            \"profile\": \"tabular-data-resource\",\n            \"schema\": \"cars.schema.json\"\n        }\n    ]\n}\n```\n\nLet's read the resources separately:\n\n```python\npackage = Package('datapackage.json')\npackage.get_resource('cars-2017').read(keyed=True) == [\n    {'name': 'bmw', 'value': 2017},\n    {'name': 'tesla', 'value': 2017},\n    {'name': 'nissan', 'value': 2017},\n]\npackage.get_resource('cars-2018').read(keyed=True) == [\n    {'name': 'bmw', 'value': 2018},\n    {'name': 'tesla', 'value': 2018},\n    {'name': 'nissan', 'value': 2018},\n]\n```\n\nOn the other hand, these resources are defined with a `group: cars` field. 
It means we can treat them as a group:\n\n```python\npackage = Package('datapackage.json')\npackage.get_group('cars').read(keyed=True) == [\n    {'name': 'bmw', 'value': 2017},\n    {'name': 'tesla', 'value': 2017},\n    {'name': 'nissan', 'value': 2017},\n    {'name': 'bmw', 'value': 2018},\n    {'name': 'tesla', 'value': 2018},\n    {'name': 'nissan', 'value': 2018},\n]\n```\n\nWe can use this approach when we need to save the data package to a storage backend, for example a SQL database. There is a `merge_groups` flag to enable the grouping behaviour:\n\n```python\npackage = Package('datapackage.json')\npackage.save(storage='sql', engine=engine)\n# SQL tables:\n# - cars-2017\n# - cars-2018\npackage.save(storage='sql', engine=engine, merge_groups=True)\n# SQL tables:\n# - cars\n```\n\n### Working with Profile\n\nA component to represent a JSON Schema profile from the [Profiles Registry](https://specs.frictionlessdata.io/schemas/registry.json):\n\n```python\nfrom datapackage import Profile, exceptions\n\nprofile = Profile('data-package')\n\nprofile.name # data-package\nprofile.jsonschema # JSON Schema contents\n\ntry:\n    valid = profile.validate(descriptor)\nexcept exceptions.ValidationError as exception:\n    for error in exception.errors:\n        pass  # handle individual error\n```\n\n### Working with Foreign Keys\n\nThe library supports foreign keys described in the [Table Schema](http://specs.frictionlessdata.io/table-schema/#foreign-keys) specification. 
It means that if your data package descriptor uses the `resources[].schema.foreignKeys` property for some resources, data integrity will be checked on reading operations.\n\nConsider we have a data package:\n\n```python\nDESCRIPTOR = {\n  'resources': [\n    {\n      'name': 'teams',\n      'data': [\n        ['id', 'name', 'city'],\n        ['1', 'Arsenal', 'London'],\n        ['2', 'Real', 'Madrid'],\n        ['3', 'Bayern', 'Munich'],\n      ],\n      'schema': {\n        'fields': [\n          {'name': 'id', 'type': 'integer'},\n          {'name': 'name', 'type': 'string'},\n          {'name': 'city', 'type': 'string'},\n        ],\n        'foreignKeys': [\n          {\n            'fields': 'city',\n            'reference': {'resource': 'cities', 'fields': 'name'},\n          },\n        ],\n      },\n    }, {\n      'name': 'cities',\n      'data': [\n        ['name', 'country'],\n        ['London', 'England'],\n        ['Madrid', 'Spain'],\n      ],\n    },\n  ],\n}\n```\n\nLet's check relations for the `teams` resource:\n\n```python\nfrom datapackage import Package\n\npackage = Package(DESCRIPTOR)\nteams = package.get_resource('teams')\nteams.check_relations()\n# tableschema.exceptions.RelationError: Foreign key \"['city']\" violation in row \"4\"\n```\n\nAs we can see, there is a foreign key violation: our lookup table `cities` doesn't have the city of `Munich`, but we have a team from there. We need to fix it in the `cities` resource:\n\n```python\npackage.descriptor['resources'][1]['data'].append(['Munich', 'Germany'])\npackage.commit()\nteams = package.get_resource('teams')\nteams.check_relations()\n# True\n```\n\nFixed! But a check operation is not all that's available. 
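As an aside, the relation check above conceptually boils down to a set-membership test against the lookup table. Here is a minimal, library-free sketch of the idea (the `check_fk` helper is hypothetical, not the library's implementation):

```python
def check_fk(rows, lookup_rows, field, ref_field):
    # Collect every value the foreign key is allowed to reference
    allowed = {row[ref_field] for row in lookup_rows}
    # Data starts at row 2 because row 1 is the header row
    for index, row in enumerate(rows, start=2):
        if row[field] not in allowed:
            raise ValueError(
                "Foreign key '%s' violation in row %d" % (field, index))
    return True

teams = [{'city': 'London'}, {'city': 'Madrid'}, {'city': 'Munich'}]
cities = [{'name': 'London'}, {'name': 'Madrid'}]
```

With the `cities` lookup above, `check_fk(teams, cities, 'city', 'name')` raises for `Munich` in row 4; appending a `Munich` entry to the lookup makes it pass, mirroring the fix shown above.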
We can use the `relations` argument of the `resource.iter/read` methods to dereference a resource's relations:\n\n```python\nteams.read(keyed=True, relations=True)\n#[{'id': 1, 'name': 'Arsenal', 'city': {'name': 'London', 'country': 'England'}},\n# {'id': 2, 'name': 'Real', 'city': {'name': 'Madrid', 'country': 'Spain'}},\n# {'id': 3, 'name': 'Bayern', 'city': {'name': 'Munich', 'country': 'Germany'}}]\n```\n\nInstead of a plain city name we've got a dictionary containing the city's data. These `resource.iter/read` methods will fail with the same error as `resource.check_relations` if there is an integrity issue, but only if the `relations=True` flag is passed.\n\n### Working with validate/infer\n\nA standalone function to validate a data package descriptor:\n\n```python\nfrom datapackage import validate, exceptions\n\ntry:\n    valid = validate(descriptor)\nexcept exceptions.ValidationError as exception:\n    for error in exception.errors:\n        pass  # handle individual error\n```\n\nA standalone function to infer a data package descriptor:\n\n```python\nfrom datapackage import infer\n\ndescriptor = infer('**/*.csv')\n#{ profile: 'tabular-data-package',\n#  resources:\n#   [ { path: 'data/cities.csv',\n#       profile: 'tabular-data-resource',\n#       encoding: 'utf-8',\n#       name: 'cities',\n#       format: 'csv',\n#       mediatype: 'text/csv',\n#       schema: [Object] },\n#     { path: 'data/population.csv',\n#       profile: 'tabular-data-resource',\n#       encoding: 'utf-8',\n#       name: 'population',\n#       format: 'csv',\n#       mediatype: 'text/csv',\n#       schema: [Object] } ] }\n```\n\n### Frequently Asked Questions\n\n#### Accessing data behind a proxy server?\n\nBefore the `package = Package(\"https://xxx.json\")` call, set these environment variables:\n\n```python\nimport os\n\nos.environ[\"HTTP_PROXY\"] = 'xxx'\nos.environ[\"HTTPS_PROXY\"] = 'xxx'\n```\n\n## API Reference\n\n### `cli`\n```python\ncli()\n```\nCommand-line interface\n\n```\nUsage: datapackage [OPTIONS] COMMAND 
[ARGS]...\n\nOptions:\n  --version  Show the version and exit.\n  --help     Show this message and exit.\n\nCommands:\n  infer\n  validate\n```\n\n\n### `Package`\n```python\nPackage(self,\n        descriptor=None,\n        base_path=None,\n        strict=False,\n        unsafe=False,\n        storage=None,\n        schema=None,\n        default_base_path=None,\n        **options)\n```\nPackage representation\n\n__Arguments__\n- __descriptor (str/dict)__: data package descriptor as local path, url or object\n- __base_path (str)__: base path for all relative paths\n- __strict (bool)__: strict flag to alter validation behavior.\n        Setting it to `True` leads to throwing errors\n        on any operation with an invalid descriptor\n- __unsafe (bool)__:\n        if `True`, unsafe paths will be allowed. For more information, see\n        https://specs.frictionlessdata.io/data-resource/#data-location.\n        Defaults to `False`\n- __storage (str/tableschema.Storage)__: storage name like `sql` or storage instance\n- __options (dict)__: storage options to use for storage creation\n\n__Raises__\n- `DataPackageException`: raises error if something goes wrong\n\n\n\n#### `package.base_path`\nPackage's base path\n\n__Returns__\n\n`str/None`: returns the data package base path\n\n\n\n#### `package.descriptor`\nPackage's descriptor\n\n__Returns__\n\n`dict`: descriptor\n\n\n\n#### `package.errors`\nValidation errors\n\nAlways empty in strict mode.\n\n__Returns__\n\n`Exception[]`: validation errors\n\n\n\n#### `package.profile`\nPackage's profile\n\n__Returns__\n\n`Profile`: an instance of the `Profile` class\n\n\n\n#### `package.resource_names`\nPackage's resource names\n\n__Returns__\n\n`str[]`: returns an array of resource names\n\n\n\n#### `package.resources`\nPackage's resources\n\n__Returns__\n\n`Resource[]`: returns an array of `Resource` instances\n\n\n\n#### `package.valid`\nValidation status\n\nAlways true in strict mode.\n\n__Returns__\n\n`bool`: validation status\n\n\n\n#### 
`package.get_resource`\n```python\npackage.get_resource(name)\n```\nGet data package resource by name.\n\n__Arguments__\n- __name (str)__: data resource name\n\n__Returns__\n\n`Resource/None`: returns a `Resource` instance or `None` if not found\n\n\n\n#### `package.add_resource`\n```python\npackage.add_resource(descriptor)\n```\nAdd a new resource to the data package.\n\nThe data package descriptor will be validated with the newly added resource descriptor.\n\n__Arguments__\n- __descriptor (dict)__: data resource descriptor\n\n__Raises__\n- `DataPackageException`: raises error if something goes wrong\n\n__Returns__\n\n`Resource/None`: returns the added `Resource` instance or `None` if not added\n\n\n\n#### `package.remove_resource`\n```python\npackage.remove_resource(name)\n```\nRemove a data package resource by name.\n\nThe data package descriptor will be validated after resource descriptor removal.\n\n__Arguments__\n- __name (str)__: data resource name\n\n__Raises__\n- `DataPackageException`: raises error if something goes wrong\n\n__Returns__\n\n`Resource/None`: returns the removed `Resource` instance or `None` if not found\n\n\n\n#### `package.get_group`\n```python\npackage.get_group(name)\n```\nReturns a group of tabular resources by name.\n\nFor more information about groups, see [Group](#group).\n\n__Arguments__\n- __name (str)__: name of a group of resources\n\n__Raises__\n- `DataPackageException`: raises error if something goes wrong\n\n__Returns__\n\n`Group/None`: returns a `Group` instance or `None` if not found\n\n\n\n#### `package.infer`\n```python\npackage.infer(pattern=False)\n```\nInfer data package metadata.\n\n\u003e Argument `pattern` works only for local files\n\nIf `pattern` is not provided, only existing resources will be inferred\n(adding metadata like encoding, profile etc). 
If `pattern` is provided,
new resources with file names matching the pattern will be added and inferred.
It commits changes to the data package instance.

__Arguments__
- __pattern (str)__: glob pattern for new resources

__Returns__

`dict`: returns the data package descriptor


#### `package.commit`
```python
package.commit(strict=None)
```
Update the data package instance if there are in-place changes in the descriptor.

__Example__


```python
package = Package({
    'name': 'package',
    'resources': [{'name': 'resource', 'data': ['data']}]
})

package.name # package
package.descriptor['name'] = 'renamed-package'
package.name # package
package.commit()
package.name # renamed-package
```

__Arguments__
- __strict (bool)__: alter `strict` mode for further work

__Raises__
- `DataPackageException`: raises error if something goes wrong

__Returns__

`bool`: returns true on success and false if not modified


#### `package.save`
```python
package.save(target=None,
             storage=None,
             merge_groups=False,
             to_base_path=False,
             **options)
```
Saves this data package

It saves to storage if the `storage` argument is passed, or
saves this data package's descriptor to a json file if the `target` argument
ends with `.json`, or saves this data package to a zip file otherwise.

__Example__


It creates a zip file at `target` with the contents
of this Data Package and its resources.
Every resource whose content
lives in the local filesystem will be copied into the zip file.
Consider the following Data Package descriptor:

```json
{
    "name": "gdp",
    "resources": [
        {"name": "local", "format": "CSV", "path": "data.csv"},
        {"name": "inline", "data": [4, 8, 15, 16, 23, 42]},
        {"name": "remote", "url": "http://someplace.com/data.csv"}
    ]
}
```

The final structure of the zip file will be:

```
./datapackage.json
./data/local.csv
```

With the contents of `datapackage.json` being the same as
returned by `datapackage.descriptor`. The resources' file names are generated
based on their `name` and `format` fields if they exist.
If a resource has no `name`, `resource-X` is used instead,
where `X` is the index of the resource in the `resources` list (starting at zero).
If the resource has a `format`, it is lowercased and appended to the `name`,
becoming "`name.format`".

__Arguments__
- __target (string/filelike)__:
        the file path or a file-like object where
        the contents of this Data Package will be saved into.
- __storage (str/tableschema.Storage)__:
        storage name like `sql` or a storage instance
- __merge_groups (bool)__:
        save all the group's tabular resources into one bucket
        if a storage is provided (for example into one SQL table).
        Read more about [Group](#group).
- __to_base_path (bool)__:
        save the package to the package's base path
        using the "\<base_path\>/\<target\>" route
- __options (dict)__:
        storage options to use for storage creation

__Raises__
- `DataPackageException`: raises if there was some error writing the package

__Returns__

`bool/Storage`: on success returns true or a `Storage` instance

### `Resource`
```python
Resource(self,
         descriptor={},
         base_path=None,
         strict=False,
         unsafe=False,
         storage=None,
         package=None,
         **options)
```
Resource representation

__Arguments__
- __descriptor (str/dict)__: data resource descriptor as local path, url or object
- __base_path (str)__: base path for all relative paths
- __strict (bool)__:
        strict flag to alter validation behavior. Setting it to `True`
        leads to throwing errors on any operation with an invalid descriptor
- __unsafe (bool)__:
        if `True`, unsafe paths will be allowed. For more information see
        https://specs.frictionlessdata.io/data-resource/#data-location.
        Defaults to `False`
- __storage (str/tableschema.Storage)__: storage name like `sql` or a storage instance
- __options (dict)__: storage options to use for storage creation

__Raises__
- `DataPackageException`: raises error if something goes wrong


#### `resource.data`
Return resource data


#### `resource.descriptor`
Resource's descriptor

__Returns__

`dict`: descriptor


#### `resource.errors`
Validation errors

Always empty in strict mode.

__Returns__

`Exception[]`: validation errors


#### `resource.group`
Group name

__Returns__

`str`: group name


#### `resource.headers`
Resource's headers

> Only for tabular resources (reading has to be started first or it's `None`)

__Returns__

`str[]/None`: returns data source headers


#### `resource.inline`
Whether the resource is inline

__Returns__

`bool`: returns true if the resource is inline


#### `resource.local`
Whether the resource is local

__Returns__

`bool`: returns true if the resource is local


#### `resource.multipart`
Whether the resource is multipart

__Returns__

`bool`: returns true if the resource is multipart


#### `resource.name`
Resource name

__Returns__

`str`: name


#### `resource.package`
Package instance if the resource belongs to some package

__Returns__

`Package/None`: a package instance if available


#### `resource.profile`
Resource's profile

__Returns__

`Profile`: an instance of the `Profile` class


#### `resource.remote`
Whether the resource is remote

__Returns__

`bool`: returns true if the resource is remote


#### `resource.schema`
Resource's schema

> Only for tabular resources

For tabular resources it returns a `Schema` instance to interact with the data schema.
Read the API documentation - [tableschema.Schema](https://github.com/frictionlessdata/tableschema-py#schema).

__Returns__

`tableschema.Schema`: schema


#### `resource.source`
Resource's source

The combination of `resource.source` and `resource.inline/local/remote/multipart`
provides a predictable interface to work with resource data.

__Returns__

`list/str`: returns the `data` or `path` property


#### `resource.table`
Return resource table


#### `resource.tabular`
Whether the resource is tabular

__Returns__

`bool`: returns true if the resource is tabular


#### `resource.valid`
Validation status

Always true in strict mode.

__Returns__

`bool`: validation status


#### `resource.iter`
```python
resource.iter(integrity=False, relations=False, **options)
```
Iterates through the resource data and emits rows cast based on table schema.

> Only for tabular resources

__Arguments__


    keyed (bool):
        yield keyed rows in a form of `{header1: value1, header2: value2}`
        (default is false; the form of rows is `[value1, value2]`)

    extended (bool):
        yield extended rows in the form of `[rowNumber, [header1, header2], [value1, value2]]`
        (default is false; the form of rows is `[value1, value2]`)

    cast (bool):
        disable data casting if false
        (default is true)

    integrity (bool):
        if true, the actual size in BYTES and SHA256 hash of the file
        will be checked against `descriptor.bytes` and `descriptor.hash`
        (other hashing algorithms are not supported and will be skipped
silently)

    relations (bool):
        if true, foreign key fields will be checked and resolved to their references

    foreign_keys_values (dict):
        three-level dictionary of foreign key references optimized
        to speed up the validation process in a form of
        `{resource1: {(fk_field1, fk_field2): {(value1, value2): {one_keyedrow}, ... }}}`.
        If not provided but relations is true, it will be created
        before the validation process by the *index_foreign_keys_values* method

    exc_handler (func):
        optional custom exception handler callable.
        Can be used to defer raising errors (i.e. "fail late"), e.g.
        for data validation purposes. Must support the signature below

__Custom exception handler__


```python
def exc_handler(exc, row_number=None, row_data=None, error_data=None):
    '''Custom exception handler (example)

    # Arguments:
        exc(Exception):
            Deferred exception instance
        row_number(int):
            Data row number that triggers exception exc
        row_data(OrderedDict):
            Invalid data row source data
        error_data(OrderedDict):
            Data row source data field subset responsible for the error, if
            applicable (e.g. invalid primary or foreign key fields). May be
            identical to row_data.
    '''
    # ...
```

__Raises__
- `DataPackageException`: base class of any error
- `CastError`: data cast error
- `IntegrityError`: integrity checking error
- `UniqueKeyError`: unique key constraint violation
- `UnresolvedFKError`: unresolved foreign key reference error

__Returns__

`Iterator[list]`: yields rows


#### `resource.read`
```python
resource.read(integrity=False,
              relations=False,
              foreign_keys_values=False,
              **options)
```
Read the whole resource and return it as an array of rows

> Only for tabular resources
> It has the same API as `resource.iter` except for

__Arguments__
- __limit (int)__: limit count of rows to read and return

__Returns__

`list[]`: returns rows


#### `resource.check_integrity`
```python
resource.check_integrity()
```
Checks resource integrity

> Only for tabular resources

It checks the size in BYTES and SHA256 hash of the file
against `descriptor.bytes` and `descriptor.hash`
(other hashing algorithms are not supported and will be skipped silently).

__Raises__
- `exceptions.IntegrityError`: raises if there are integrity issues

__Returns__

`bool`: returns True if no issues


#### `resource.check_relations`
```python
resource.check_relations(foreign_keys_values=False)
```
Check relations

> Only for tabular resources

It checks foreign keys and raises an exception if there are integrity issues.

__Raises__
- `exceptions.RelationError`: raises if there are relation issues

__Returns__

`bool`: returns True if no issues


#### `resource.drop_relations`
```python
resource.drop_relations()
```
Drop relations

> Only for tabular resources

Remove relations data from memory

__Returns__

`bool`: returns True


#### `resource.raw_iter`
```python
resource.raw_iter(stream=False)
```
Iterate over data chunks as bytes.

If `stream` is
true, a file-like object will be returned.

__Arguments__
- __stream (bool)__: if true, a file-like object will be returned

__Returns__

`bytes[]/filelike`: returns bytes[]/filelike


#### `resource.raw_read`
```python
resource.raw_read()
```
Returns resource data as bytes.

__Returns__

`bytes`: returns resource data in bytes


#### `resource.infer`
```python
resource.infer(**options)
```
Infer resource metadata

Like name, format, mediatype, encoding, schema and profile.
It commits these changes to the resource instance.

__Arguments__
- __options__:
        options will be passed to the `tableschema.infer` call,
        for more control over the results (e.g. for setting `limit`, `confidence` etc.).

__Returns__

`dict`: returns the resource descriptor


#### `resource.commit`
```python
resource.commit(strict=None)
```
Update the resource instance if there are in-place changes in the descriptor.

__Arguments__
- __strict (bool)__: alter `strict` mode for further work

__Raises__
- `DataPackageException`: raises error if something goes wrong

__Returns__

`bool`: returns true on success and false if not modified


#### `resource.save`
```python
resource.save(target, storage=None, to_base_path=False, **options)
```
Saves this resource

It saves to storage if the `storage` argument is passed, or
saves this resource's descriptor to a json file otherwise.

__Arguments__
- __target (str)__:
        path where to save the resource
- __storage (str/tableschema.Storage)__:
        storage name like `sql` or a storage instance
- __to_base_path (bool)__:
        save the resource to the resource's base path
        using the "\<base_path\>/\<target\>" route
- __options (dict)__:
        storage options to use for storage creation

__Raises__
- `DataPackageException`: raises error if something goes wrong

__Returns__

`bool`: returns true on success


### `Group`
```python
Group(self, resources)
```
Group representation

__Arguments__
- __resources (Resource[])__: list of TABULAR resources


#### `group.headers`
Group's headers

__Returns__

`str[]/None`: returns headers


#### `group.name`
Group name

__Returns__

`str`: name


#### `group.schema`
Group's schema

__Returns__

`tableschema.Schema`: schema


#### `group.iter`
```python
group.iter(**options)
```
Iterates through the group data and emits rows cast based on table schema.

> It concatenates all the resources and has the same API as `resource.iter`


#### `group.read`
```python
group.read(limit=None, **options)
```
Read the whole group and return it as an array of rows

> It concatenates all the resources and has the same API as `resource.read`


#### `group.check_relations`
```python
group.check_relations()
```
Check the group's relations

The same as `resource.check_relations` but without the optional
argument *foreign_keys_values*.
This method will test the foreignKeys of the
whole group at once, optimizing the process by creating the foreign_keys_values
hashmap only once before testing the set of resources.


### `Profile`
```python
Profile(self, profile)
```
Profile representation

__Arguments__
- __profile (str)__: profile name in registry or URL to a JSON Schema

__Raises__
- `DataPackageException`: raises error if something goes wrong


#### `profile.jsonschema`
JSONSchema content

__Returns__

`dict`: returns the profile's JSON Schema contents


#### `profile.name`
Profile name

__Returns__

`str/None`: name if available


#### `profile.validate`
```python
profile.validate(descriptor)
```
Validate a data package `descriptor` against the profile.

__Arguments__
- __descriptor (dict)__: retrieved and dereferenced data package descriptor

__Raises__
- `ValidationError`: raises if not valid

__Returns__

`bool`: returns True if valid


### `validate`
```python
validate(descriptor)
```
Validate a data package descriptor.

__Arguments__
- __descriptor (str/dict)__: package descriptor (one of):
      - local path
      - remote url
      - object

__Raises__
- `ValidationError`: raises on invalid

__Returns__

`bool`: returns true on valid


### `infer`
```python
infer(pattern, base_path=None)
```
Infer a data package descriptor.

> Argument `pattern` works only for local files

__Arguments__
- __pattern (str)__: glob file pattern

__Returns__

`dict`: returns a data package descriptor


### `DataPackageException`
```python
DataPackageException(self, message, errors=[])
```
Base class for all DataPackage/TableSchema exceptions.

If there are multiple errors, they can be read from the exception object:

```python
try:
    ...  # lib action
except DataPackageException as exception:
    if exception.multiple:
        for error in exception.errors:
            ...  # handle error
```


#### `datapackageexception.errors`
List of nested errors

__Returns__

`DataPackageException[]`: list of nested errors


#### `datapackageexception.multiple`
Whether it's a nested exception

__Returns__

`bool`: whether it's a nested exception


### `TableSchemaException`
```python
TableSchemaException(self, message, errors=[])
```
Base class for all TableSchema exceptions.


### `LoadError`
```python
LoadError(self, message, errors=[])
```
All loading errors.


### `CastError`
```python
CastError(self, message, errors=[])
```
All value cast errors.


### `IntegrityError`
```python
IntegrityError(self, message, errors=[])
```
All integrity errors.


### `RelationError`
```python
RelationError(self, message, errors=[])
```
All relations errors.


### `StorageError`
```python
StorageError(self, message, errors=[])
```
All storage errors.


## Contributing

> The project follows the [Open Knowledge International coding standards](https://github.com/okfn/coding-standards).

The recommended way to get started is to create and activate a project virtual environment.
To install the package and development dependencies into the active environment:

```bash
$ make install
```

To run tests with linting and coverage:

```bash
$ make test
```

## Changelog

Only breaking and the most important changes are described here.
The full changelog and documentation for all released versions can be found in the nicely formatted [commit history](https://github.com/frictionlessdata/datapackage-py/commits/master).

#### v1.15

> WARNING: it can be breaking for some setups, please read the discussions below

- Fixed header management according to the specs:
    - https://github.com/frictionlessdata/datapackage-py/pull/257
    - https://github.com/frictionlessdata/datapackage-py/issues/256
    - https://github.com/frictionlessdata/forum/issues/1

#### v1.14

- Added experimental options for picking/skipping fields/rows

#### v1.13

- Added the `unsafe` option to Package and Resource (#262)

#### v1.12

- Use `chardet` for encoding detection by default. For `cchardet`: `pip install datapackage[cchardet]`

#### v1.11

- `resource/package.save` now accepts a `to_base_path` argument (#254)
- `package.save` now returns a `Storage` instance if available

#### v1.10

- Added the ability to check a tabular resource's integrity

#### v1.9

- Added the `resource.package` property

#### v1.8

- Added support for [groups of resources](#group)

#### v1.7

- Added support for [compression of resources](https://frictionlessdata.io/specs/patterns/#compression-of-resources)

#### v1.6

- Added support for custom request sessions

#### v1.5

Updated behaviour:
- Added support for Python 3.7

#### v1.4

New API added:
- added `skip_rows` support to the resource descriptor

#### v1.3

New API added:
- property `package.base_path` is now publicly available

#### v1.2

Updated behaviour:
- CLI command `$ datapackage infer` now outputs only a JSON-formatted data package descriptor.

#### v1.1

New API added:
- Added an integration between `Package/Resource` and `tableschema.Storage` - https://github.com/frictionlessdata/tableschema-py#storage.
  It allows loading and saving data packages from/to different storages like SQL/BigQuery/etc.