# tableQA
AI tool for querying natural language on tabular data, built using QA models from [transformers](https://huggingface.co/transformers/model_doc/bert.html#tfbertforquestionanswering). This work is described in the following paper:
[TableQuery: Querying tabular data with natural language, by Abhijith Neil Abraham, Fariz Rahman and Damanpreet Kaur](https://arxiv.org/abs/2202.00454).
If you use TableQA, please cite the paper. Here is a detailed [blog](https://dev.to/abhijithneilabraham/tableqa-query-your-tabular-data-with-natural-language-39o) post explaining how this works.
Tabular data can be:
- Dataframes
- CSV files

[![Build Status](https://travis-ci.com/abhijithneilabraham/tableQA.svg?branch=master)](https://travis-ci.com/abhijithneilabraham/tableQA)
[![Gradient](https://assets.paperspace.io/img/gradient-badge.svg)](https://console.paperspace.com/github/abhijithneilabraham/tableQA/blob/master/examples/sample.ipynb)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1Bgd3L-839NVZiP3QqWfpkYIufQIm4Rar?usp=sharing)

#### Features
* Supports detection from multiple CSVs (CSVs can also be read from Amazon S3)
* Supports FuzzyString matching: incomplete column values in a query can be automatically detected and filled in (see the sketch after this list)
* Supports databases: SQLite, PostgreSQL, MySQL, Amazon RDS (PostgreSQL, MySQL)
* Open-domain; no training required
* Add a manual schema for a customized experience
* Auto-generates schemas when none is provided
* Data visualisations
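As a quick illustration of the fuzzy-matching feature, a misspelled or partial value can still resolve to a stored one. This is a hypothetical sketch: the DataFrame, its values, and the exact resolution behavior are assumptions, not output from the library.

```
import pandas as pd
from tableqa.agent import Agent

# illustrative data: a column whose values fall into a few distinct classes
df = pd.DataFrame({"Cancer_site": ["Stomach", "Lung"], "Death_Count": [22, 30]})
agent = Agent(df)

# "stomac" is incomplete; fuzzy matching can detect and fill in the
# stored value "Stomach" when the SQL query is generated
print(agent.get_query("how many died of stomac cancer"))
```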
#### Supported operations
- [X] SELECT
    - [X] one column
    - [X] multiple columns
    - [X] all columns
    - [X] aggregate functions
        - [X] distinct select
        - [X] count-select
        - [X] sum-select
        - [X] avg-select
        - [X] min-select
        - [X] max-select
- [X] WHERE
    - [X] one condition
    - [X] multiple conditions
    - [X] operators
        - [X] equal operator
        - [X] greater-than operator
        - [X] less-than operator
        - [X] between operator
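Each item above corresponds to a SQL pattern the agent can generate. A minimal sketch using `get_query` (documented below); the data and questions are illustrative, and the exact SQL emitted depends on your schema:

```
import pandas as pd
from tableqa.agent import Agent

df = pd.DataFrame({"Cancer_site": ["Stomach", "Lung"],
                   "Year": [2011, 2011],
                   "Death_Count": [22, 30]})
agent = Agent(df)

# aggregate select -> e.g. SELECT SUM(Death_Count) ...
print(agent.get_query("what is the total death count"))

# WHERE with multiple conditions -> e.g. ... WHERE Cancer_site = "Stomach" AND Year = "2011"
print(agent.get_query("how many people died of stomach cancer in 2011"))
```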
### Configuration

##### Install via pip:
```
pip install tableqa
```
##### Install from source:
```
git clone https://github.com/abhijithneilabraham/tableQA
cd tableQA
python setup.py install
```
## Quickstart
#### Do a sample query
```
from tableqa.agent import Agent
agent = Agent(df)  # pass in your DataFrame
response = agent.query_db("Your question here")
print(response)
```
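`df` in the quickstart above is a pandas DataFrame. A minimal way to build one from a CSV (the file name here is illustrative):

```
import pandas as pd

# any tabular source works; here we load an illustrative CSV file
df = pd.read_csv("cancer_death.csv")
```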
#### Get an SQL query from the question
```
sql = agent.get_query("Your question here")
print(sql)  # returns an SQL query
```

#### Adding a manual schema
##### Schema Format:
```
{
    "name": DATABASE NAME,
    "keywords": [DATABASE KEYWORDS],
    "columns": [
        {
            "name": COLUMN 1 NAME,
            "mapping": {
                CATEGORY 1: [CATEGORY 1 KEYWORDS],
                CATEGORY 2: [CATEGORY 2 KEYWORDS]
            }
        },
        {
            "name": COLUMN 2 NAME,
            "keywords": [COLUMN 2 KEYWORDS]
        },
        {
            "name": COLUMN 3 NAME,
            "keywords": [COLUMN 3 KEYWORDS],
            "summable": "True"
        }
    ]
}
```
* Mappings are for columns whose values fall into only a few distinct classes.
* Include only the column names that need manual keywords or mappings; the rest will be autogenerated.
* ```summable``` is included for numeric columns whose values are already count representations, e.g. ```Death Count``` and ```Cases```, which consist of values that already represent a count.

Example (with manual schema):
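Below is a hypothetical schema for the `cancer_death` data used in the queries that follow. The table and column names are inferred from the generated SQL shown later in this README; the keywords and mappings are illustrative guesses:

```
schema = {
    "name": "cancer_death",
    "keywords": ["cancer", "death"],
    "columns": [
        {
            "name": "Cancer_site",
            "mapping": {
                "Stomach": ["stomach"],
                "Lung": ["lung"]
            }
        },
        {
            "name": "Death_Count",
            "keywords": ["died", "deaths"],
            "summable": "True"
        }
    ]
}
```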
##### Database query
* Default database: SQLite (a file-based database; it does not require creating a separate connection.)
```
from tableqa.agent import Agent
agent = Agent(df, schema)  # pass the DataFrame and schema objects
response = agent.query_db("how many people died of stomach cancer in 2011")
print(response)
# Response: [(22,)]
```

* To use PostgreSQL, you must have a PostgreSQL server installed and running locally. To download PostgreSQL, visit the [PostgreSQL page](https://www.postgresql.org).
```
from tableqa.agent import Agent
agent = Agent(df, schema_file, 'postgres', username='username', password='password', database='DBname', host='localhost', port=5432, aws_db=False)
response = agent.query_db("how many people died of stomach cancer in 2011")
print(response)
# Response: [(22,)]
```

* To use MySQL, you must have a MySQL server installed and running locally. To download MySQL, visit the [MySQL downloads page](https://www.mysql.com/downloads/).
```
from tableqa.agent import Agent
agent = Agent(df, schema_file, 'mysql', username='username', password='password', database='DBname', host='localhost', port=3306, aws_db=False)
response = agent.query_db("how many people died of stomach cancer in 2011")
print(response)
# Response: [(22,)]
```
* To use PostgreSQL or MySQL on Amazon RDS, you must create a database on Amazon RDS. The RDS instance must be in a public subnet, with security groups that allow connections from outside AWS.
Refer to step 1 in this [document](https://aws.amazon.com/getting-started/hands-on/create-mysql-db/) to create a MySQL DB instance on Amazon RDS. The same steps can be followed to create a PostgreSQL DB instance by selecting PostgreSQL in the Engine tab. Obtain the username, password, database, endpoint, and port from your database connection details on Amazon RDS.
```
from tableqa.agent import Agent
agent = Agent(df, schema_file, 'postgres', username='Master username', password='Master password', database='DB name', host='Endpoint', port='Port', aws_db=True)
response = agent.query_db("how many people died of stomach cancer in 2011")
print(response)
# Response: [(22,)]
```
##### SQL query
```
sql = agent.get_query("How many people died of stomach cancer in 2011")
print(sql)
# SQL query: SELECT SUM(Death_Count) FROM cancer_death WHERE Cancer_site = "Stomach" AND Year = "2011"
```

#### Multiple CSVs
* Pass the absolute paths of the directories containing the CSVs and schemas, respectively. Refer to [cleaned_data](tableqa/cleaned_data) and [schema](tableqa/schema) for examples.
##### Example
* Read CSVs and schemas from the local machine:
```
csv_path="/content/tableQA/tableqa/cleaned_data"
schema_path="/content/tableQA/tableqa/schema"
agent=Agent(csv_path,schema_path)```
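Continuing the snippet above, a multi-CSV agent is queried the same way as a single-table one; per the feature list, it detects which CSV the question refers to. The question below is illustrative:

```
response = agent.query_db("how many people died of stomach cancer in 2011")
print(response)
```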
* Read CSV and schema files from Amazon S3:
1) [Create a bucket](https://docs.aws.amazon.com/AmazonS3/latest/gsg/CreatingABucket.html) on Amazon S3.
2) [Upload objects](https://docs.aws.amazon.com/AmazonS3/latest/gsg/PuttingAnObjectInABucket.html) to the bucket.
3) [Create an IAM user](https://www.atensoftware.com/p90.php?q=309) and give it access to read files from Amazon S3 storage.
4) Obtain the access key and secret access key for the user and pass them as arguments to the agent.
```
csv_path="s3://{bucket}/cleaned_data"
schema_path="s3://{bucket}/schema"
agent = Agent(csv_path, schema_path, aws_s3=True, access_key_id=access_key_id, secret_access_key=secret_access_key)```
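Rather than hard-coding credentials, the keys in the snippet above can be read from environment variables (a standard AWS convention; any variable names work):

```
import os

# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY are the conventional names
access_key_id = os.environ["AWS_ACCESS_KEY_ID"]
secret_access_key = os.environ["AWS_SECRET_ACCESS_KEY"]

agent = Agent(csv_path, schema_path, aws_s3=True,
              access_key_id=access_key_id, secret_access_key=secret_access_key)
```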
#### Join us
Join our [Slack](https://join.slack.com/t/newworkspace-ehh1873/shared_invite/zt-hp3i6ic7-exMal1I4ZmFMWaHAwXk8HA) workspace.