{"id":13546812,"url":"https://github.com/obsei/obsei","last_synced_at":"2026-01-14T09:48:35.066Z","repository":{"id":39587763,"uuid":"307191665","full_name":"obsei/obsei","owner":"obsei","description":"Obsei is a low code AI powered automation tool. It can be used in various business flows like social listening, AI based alerting, brand image analysis, comparative study and more .","archived":false,"fork":false,"pushed_at":"2025-11-16T15:48:38.000Z","size":17081,"stargazers_count":1354,"open_issues_count":36,"forks_count":174,"subscribers_count":28,"default_branch":"master","last_synced_at":"2025-11-16T17:26:06.556Z","etag":null,"topics":["anonymization","artificial-intelligence","business-process-automation","customer-engagement","customer-support","issue-tracking-system","low-code","lowcode","natural-language-processing","nlp","process-automation","python","sentiment-analysis","social-listening","social-network-analysis","text-analysis","text-analytics","text-classification","workflow","workflow-automation"],"latest_commit_sha":null,"homepage":"https://obsei.com/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/obsei.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":"CITATION.cff","codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":"CONTRIBUTOR_LICENSE_AGREEMENT.md"}},"created_at":"2020-10-25T21:00:48.000Z","updated_at":"2025-11-15T15:51:37.000Z","dependencies_parsed_at":"2023-09-27T10:46:14.182Z","dependency_job_id":"43bae462-3a1f-4255-be98-4e8d
2f58eb1c","html_url":"https://github.com/obsei/obsei","commit_stats":{"total_commits":402,"total_committers":15,"mean_commits":26.8,"dds":0.09452736318407962,"last_synced_commit":"2cd0ff2e8e3719f94f713ca38b5569e24e12bd2e"},"previous_names":[],"tags_count":14,"template":false,"template_full_name":null,"purl":"pkg:github/obsei/obsei","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/obsei%2Fobsei","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/obsei%2Fobsei/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/obsei%2Fobsei/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/obsei%2Fobsei/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/obsei","download_url":"https://codeload.github.com/obsei/obsei/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/obsei%2Fobsei/sbom","scorecard":{"id":701148,"data":{"date":"2025-08-11","repo":{"name":"github.com/obsei/obsei","commit":"cb61269ef91647e5334f0b67aaa0467802c5369a"},"scorecard":{"version":"v5.2.1-40-gf6ed084d","commit":"f6ed084d17c9236477efd66e5b258b9d4cc7b389"},"score":4.1,"checks":[{"name":"Dangerous-Workflow","score":10,"reason":"no dangerous workflow patterns detected","details":null,"documentation":{"short":"Determines if the project's GitHub Action workflows avoid dangerous patterns.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#dangerous-workflow"}},{"name":"Code-Review","score":3,"reason":"Found 5/15 approved changesets -- score normalized to 3","details":null,"documentation":{"short":"Determines if the project requires human code review before pull requests (aka merge requests) are merged.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#code-review"}},{"name":"Maintained","score":0,"reason":"0 commit(s) and 0 issue 
activity found in the last 90 days -- score normalized to 0","details":null,"documentation":{"short":"Determines if the project is \"actively maintained\".","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#maintained"}},{"name":"Security-Policy","score":9,"reason":"security policy file detected","details":["Info: security policy file detected: SECURITY.md:1","Info: Found linked content: SECURITY.md:1","Warn: One or no descriptive hints of disclosure, vulnerability, and/or timelines in security policy","Info: Found text in security policy: SECURITY.md:1"],"documentation":{"short":"Determines if the project has published a security policy.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#security-policy"}},{"name":"Token-Permissions","score":0,"reason":"detected GitHub workflow tokens with excessive permissions","details":["Warn: no topLevel permission defined: .github/workflows/build.yml:1","Warn: no topLevel permission defined: .github/workflows/pypi_publish.yml:1","Warn: no topLevel permission defined: .github/workflows/release_draft.yml:1","Warn: no topLevel permission defined: .github/workflows/sdk_docker_publish.yml:1","Warn: no topLevel permission defined: .github/workflows/ui_docker_publish.yml:1","Info: no jobLevel write permissions found"],"documentation":{"short":"Determines if the project's workflows follow the principle of least privilege.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#token-permissions"}},{"name":"CII-Best-Practices","score":0,"reason":"no effort to earn an OpenSSF best practices badge detected","details":null,"documentation":{"short":"Determines if the project has an OpenSSF (formerly CII) Best Practices 
Badge.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#cii-best-practices"}},{"name":"Binary-Artifacts","score":10,"reason":"no binaries found in the repo","details":null,"documentation":{"short":"Determines if the project has generated executable (binary) artifacts in the source repository.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#binary-artifacts"}},{"name":"License","score":10,"reason":"license file detected","details":["Info: project has a license file: LICENSE:0","Info: FSF or OSI recognized license: Apache License 2.0: LICENSE:0"],"documentation":{"short":"Determines if the project has defined a license.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#license"}},{"name":"Fuzzing","score":0,"reason":"project is not fuzzed","details":["Warn: no fuzzer integrations found"],"documentation":{"short":"Determines if the project uses fuzzing.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#fuzzing"}},{"name":"Signed-Releases","score":-1,"reason":"no releases found","details":null,"documentation":{"short":"Determines if the project cryptographically signs release artifacts.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#signed-releases"}},{"name":"Packaging","score":-1,"reason":"packaging workflow not detected","details":["Warn: no GitHub/GitLab publishing workflow detected."],"documentation":{"short":"Determines if the project is published as a package that others can easily download, install, easily update, and uninstall.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#packaging"}},{"name":"Branch-Protection","score":-1,"reason":"internal error: error during branchesHandler.setup: internal error: githubv4.Query: Resource not 
accessible by integration","details":null,"documentation":{"short":"Determines if the default and release branches are protected with GitHub's branch protection settings.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#branch-protection"}},{"name":"Pinned-Dependencies","score":0,"reason":"dependency not pinned by hash detected -- score normalized to 0","details":["Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/build.yml:16: update your workflow using https://app.stepsecurity.io/secureworkflow/obsei/obsei/build.yml/master?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/build.yml:17: update your workflow using https://app.stepsecurity.io/secureworkflow/obsei/obsei/build.yml/master?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/build.yml:37: update your workflow using https://app.stepsecurity.io/secureworkflow/obsei/obsei/build.yml/master?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/build.yml:39: update your workflow using https://app.stepsecurity.io/secureworkflow/obsei/obsei/build.yml/master?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/pypi_publish.yml:16: update your workflow using https://app.stepsecurity.io/secureworkflow/obsei/obsei/pypi_publish.yml/master?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/pypi_publish.yml:19: update your workflow using https://app.stepsecurity.io/secureworkflow/obsei/obsei/pypi_publish.yml/master?enable=pin","Warn: third-party GitHubAction not pinned by hash: .github/workflows/release_draft.yml:11: update your workflow using https://app.stepsecurity.io/secureworkflow/obsei/obsei/release_draft.yml/master?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/sdk_docker_publish.yml:19: update your workflow using 
https://app.stepsecurity.io/secureworkflow/obsei/obsei/sdk_docker_publish.yml/master?enable=pin","Warn: third-party GitHubAction not pinned by hash: .github/workflows/sdk_docker_publish.yml:23: update your workflow using https://app.stepsecurity.io/secureworkflow/obsei/obsei/sdk_docker_publish.yml/master?enable=pin","Warn: third-party GitHubAction not pinned by hash: .github/workflows/sdk_docker_publish.yml:28: update your workflow using https://app.stepsecurity.io/secureworkflow/obsei/obsei/sdk_docker_publish.yml/master?enable=pin","Warn: third-party GitHubAction not pinned by hash: .github/workflows/sdk_docker_publish.yml:31: update your workflow using https://app.stepsecurity.io/secureworkflow/obsei/obsei/sdk_docker_publish.yml/master?enable=pin","Warn: third-party GitHubAction not pinned by hash: .github/workflows/sdk_docker_publish.yml:35: update your workflow using https://app.stepsecurity.io/secureworkflow/obsei/obsei/sdk_docker_publish.yml/master?enable=pin","Warn: third-party GitHubAction not pinned by hash: .github/workflows/sdk_docker_publish.yml:41: update your workflow using https://app.stepsecurity.io/secureworkflow/obsei/obsei/sdk_docker_publish.yml/master?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/ui_docker_publish.yml:19: update your workflow using https://app.stepsecurity.io/secureworkflow/obsei/obsei/ui_docker_publish.yml/master?enable=pin","Warn: third-party GitHubAction not pinned by hash: .github/workflows/ui_docker_publish.yml:23: update your workflow using https://app.stepsecurity.io/secureworkflow/obsei/obsei/ui_docker_publish.yml/master?enable=pin","Warn: third-party GitHubAction not pinned by hash: .github/workflows/ui_docker_publish.yml:28: update your workflow using https://app.stepsecurity.io/secureworkflow/obsei/obsei/ui_docker_publish.yml/master?enable=pin","Warn: third-party GitHubAction not pinned by hash: .github/workflows/ui_docker_publish.yml:31: update your workflow using 
https://app.stepsecurity.io/secureworkflow/obsei/obsei/ui_docker_publish.yml/master?enable=pin","Warn: third-party GitHubAction not pinned by hash: .github/workflows/ui_docker_publish.yml:35: update your workflow using https://app.stepsecurity.io/secureworkflow/obsei/obsei/ui_docker_publish.yml/master?enable=pin","Warn: third-party GitHubAction not pinned by hash: .github/workflows/ui_docker_publish.yml:41: update your workflow using https://app.stepsecurity.io/secureworkflow/obsei/obsei/ui_docker_publish.yml/master?enable=pin","Warn: containerImage not pinned by hash: Dockerfile:2: pin your Docker image by updating python:3.10-slim-bullseye to python:3.10-slim-bullseye@sha256:f1fb49e4d5501ac93d0ca519fb7ee6250842245aba8612926a46a0832a1ed089","Warn: containerImage not pinned by hash: sample-ui/Dockerfile:1: pin your Docker image by updating python:3.10-slim-bullseye to python:3.10-slim-bullseye@sha256:f1fb49e4d5501ac93d0ca519fb7ee6250842245aba8612926a46a0832a1ed089","Warn: pipCommand not pinned by hash: Dockerfile:25","Warn: pipCommand not pinned by hash: Dockerfile:32","Warn: pipCommand not pinned by hash: sample-ui/Dockerfile:14","Warn: pipCommand not pinned by hash: sample-ui/Dockerfile:15","Warn: pipCommand not pinned by hash: .github/workflows/build.yml:22","Warn: pipCommand not pinned by hash: .github/workflows/build.yml:24","Warn: pipCommand not pinned by hash: .github/workflows/build.yml:45","Warn: pipCommand not pinned by hash: .github/workflows/build.yml:46","Warn: pipCommand not pinned by hash: .github/workflows/build.yml:47","Warn: pipCommand not pinned by hash: .github/workflows/pypi_publish.yml:25","Warn: pipCommand not pinned by hash: .github/workflows/pypi_publish.yml:26","Info:   0 out of   8 GitHub-owned GitHubAction dependencies pinned","Info:   0 out of  11 third-party GitHubAction dependencies pinned","Info:   0 out of   2 containerImage dependencies pinned","Info:   0 out of  11 pipCommand dependencies 
pinned"],"documentation":{"short":"Determines if the project has declared and pinned the dependencies of its build process.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#pinned-dependencies"}},{"name":"Vulnerabilities","score":4,"reason":"6 existing vulnerabilities detected","details":["Warn: Project is vulnerable to: PYSEC-2024-153 / GHSA-rxff-vr5r-8cj5","Warn: Project is vulnerable to: GHSA-753j-mpmx-qq6g","Warn: Project is vulnerable to: GHSA-7cx3-6m66-7c5m","Warn: Project is vulnerable to: GHSA-8w49-h785-mj3c","Warn: Project is vulnerable to: GHSA-qppv-j76h-2rpx","Warn: Project is vulnerable to: GHSA-w235-7p84-xx57"],"documentation":{"short":"Determines if the project has open, known unfixed vulnerabilities.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#vulnerabilities"}},{"name":"SAST","score":0,"reason":"SAST tool is not run on all commits -- score normalized to 0","details":["Warn: 0 commits out of 21 are checked with a SAST tool"],"documentation":{"short":"Determines if the project uses static code analysis.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#sast"}}]},"last_synced_at":"2025-08-22T05:14:31.676Z","repository_id":39587763,"created_at":"2025-08-22T05:14:31.676Z","updated_at":"2025-08-22T05:14:31.676Z"},"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":28416120,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-01-14T08:38:59.149Z","status":"ssl_error","status_checked_at":"2026-01-14T08:38:43.588Z","response_time":107,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.6:443 state=error: unexpected eof while 
reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["anonymization","artificial-intelligence","business-process-automation","customer-engagement","customer-support","issue-tracking-system","low-code","lowcode","natural-language-processing","nlp","process-automation","python","sentiment-analysis","social-listening","social-network-analysis","text-analysis","text-analytics","text-classification","workflow","workflow-automation"],"created_at":"2024-08-01T12:00:45.395Z","updated_at":"2026-01-14T09:48:35.050Z","avatar_url":"https://github.com/obsei.png","language":"Python","readme":"\u003cp align=\"center\"\u003e\n    \u003cimg src=\"https://raw.githubusercontent.com/obsei/obsei-resources/master/images/obsei-flyer.png\" /\u003e\n\u003c/p\u003e\n\n---\n\u003cp align=\"center\"\u003e\n    \u003ca href=\"https://www.oraika.com\"\u003e\n            \u003cimg src=\"https://static.wixstatic.com/media/59bc4e_971f153f107e48c7912b9b2d4cd1b1a4~mv2.png/v1/fill/w_177,h_49,al_c,q_85,usm_0.66_1.00_0.01,enc_auto/3_edited.png\" /\u003e\n    \u003c/a\u003e\n\u003c/p\u003e\n\u003cp align=\"center\"\u003e\n    \u003ca href=\"https://github.com/obsei/obsei/actions\"\u003e\n        \u003cimg alt=\"Test\" src=\"https://github.com/obsei/obsei/workflows/CI/badge.svg?branch=master\"\u003e\n    \u003c/a\u003e\n    \u003ca href=\"https://github.com/obsei/obsei/blob/master/LICENSE\"\u003e\n        \u003cimg alt=\"License\" src=\"https://img.shields.io/pypi/l/obsei\"\u003e\n    \u003c/a\u003e\n    \u003ca href=\"https://pypi.org/project/obsei\"\u003e\n        \u003cimg 
src=\"https://img.shields.io/pypi/pyversions/obsei\" alt=\"PyPI - Python Version\" /\u003e\n    \u003c/a\u003e\n    \u003ca href=\"https://pypi.org/project/obsei/\"\u003e\n        \u003cimg alt=\"Release\" src=\"https://img.shields.io/pypi/v/obsei\"\u003e\n    \u003c/a\u003e\n    \u003ca href=\"https://pepy.tech/project/obsei\"\u003e\n        \u003cimg src=\"https://pepy.tech/badge/obsei/month\" alt=\"Downloads\" /\u003e\n    \u003c/a\u003e\n    \u003ca href=\"https://huggingface.co/spaces/obsei/obsei-demo\"\u003e\n        \u003cimg src=\"https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue\" alt=\"HF Spaces\" /\u003e\n    \u003c/a\u003e\n    \u003ca href=\"https://github.com/obsei/obsei/commits/master\"\u003e\n        \u003cimg alt=\"Last commit\" src=\"https://img.shields.io/github/last-commit/obsei/obsei\"\u003e\n    \u003c/a\u003e\n    \u003ca href=\"https://github.com/obsei/obsei\"\u003e\n        \u003cimg alt=\"Github stars\" src=\"https://img.shields.io/github/stars/obsei/obsei?style=social\"\u003e\n    \u003c/a\u003e\n    \u003ca href=\"https://www.youtube.com/channel/UCqdvgro1BzU13tkAfX3jCJA\"\u003e\n        \u003cimg alt=\"YouTube Channel Subscribers\" src=\"https://img.shields.io/youtube/channel/subscribers/UCqdvgro1BzU13tkAfX3jCJA?style=social\"\u003e\n    \u003c/a\u003e\n    \u003ca href=\"https://join.slack.com/t/obsei-community/shared_invite/zt-r0wnuz02-FAkAmhTAUoc6pD4SLB9Ikg\"\u003e\n        \u003cimg src=\"https://raw.githubusercontent.com/obsei/obsei-resources/master/logos/Slack_join.svg\" height=\"30\"\u003e\n    \u003c/a\u003e\n    \u003ca href=\"https://twitter.com/ObseiAI\"\u003e\n        \u003cimg src=\"https://img.shields.io/twitter/follow/ObseiAI?style=social\"\u003e\n    \u003c/a\u003e\n\u003c/p\u003e\n\n---\n\n![](https://raw.githubusercontent.com/obsei/obsei-resources/master/gifs/obsei_flow.gif)\n\n---\n\n\u003cspan style=\"color:red\"\u003e\n\u003cb\u003eNote\u003c/b\u003e: Obsei is still in alpha stage hence carefully 
use it in production carefully. Also, since it is under constant development, the master branch may contain breaking changes; please use a released version.\n\u003c/span\u003e\n\n---\n\n**Obsei** (pronounced \"Ob see\" | /əb-'sē/) is an open-source, low-code, AI-powered automation tool. _Obsei_ consists of -\n\n- **Observer**: Collects unstructured data from various sources like tweets from Twitter, subreddit comments on Reddit, page post comments from Facebook, App Store reviews, Google reviews, Amazon reviews, news, websites, etc.\n- **Analyzer**: Analyzes the collected unstructured data with various AI tasks like classification, sentiment analysis, translation, PII, etc.\n- **Informer**: Sends analyzed data to various destinations like ticketing platforms, data storage, dataframes, etc., so that the user can take further actions and perform analysis on the data.\n\nAll the Observers can store their state in databases (SQLite, Postgres, MySQL, etc.), making Obsei suitable for scheduled jobs or serverless applications.\n\n![Obsei diagram](https://raw.githubusercontent.com/obsei/obsei-resources/master/images/Obsei_diagram.png)\n\n### Future direction -\n\n- Text, image, audio, document and video oriented workflows\n- Collect data from every possible private and public channel\n- Add every possible workflow to an AI downstream application to automate manual cognitive workflows\n\n## Use cases\n\n_Obsei_ use cases include, but are not limited to -\n\n- Social listening: Listening to social media posts, comments, customer feedback, etc.\n- Alerting/Notification: To get auto-alerts for events such as customer complaints, qualified sales leads, etc.\n- Automatic customer issue creation based on customer complaints on social media, email, etc.\n- Automatic assignment of proper tags to tickets based on the content of the customer complaint, for example login issue, sign-up issue, delivery issue, etc.\n- Extraction of deeper insights from feedback on various platforms\n- Market 
research\n- Creation of dataset for various AI tasks\n- Many more based on creativity 💡\n\n## Installation\n\n### Prerequisite\n\nInstall the following (if not present already) -\n\n- Install [Python 3.7+](https://www.python.org/downloads/)\n- Install [PIP](https://pip.pypa.io/en/stable/installing/)\n\n### Install Obsei\n\nYou can install Obsei either via PIP or Conda based on your preference.\nTo install latest released version -\n\n```shell\npip install obsei[all]\n```\n\nInstall from master branch (if you want to try the latest features) -\n\n```shell\ngit clone https://github.com/obsei/obsei.git\ncd obsei\npip install --editable .[all]\n```\n  \nNote: `all` option will install all the dependencies which might not be needed for your workflow, alternatively \nfollowing options are available to install minimal dependencies as per need -\n - `pip install obsei[source]`: To install dependencies related to all observers\n - `pip install obsei[sink]`: To install dependencies related to all informers\n - `pip install obsei[analyzer]`:  To install dependencies related to all analyzers, it will install pytorch as well\n - `pip install obsei[twitter-api]`: To install dependencies related to Twitter observer\n - `pip install obsei[google-play-scraper]`: To install dependencies related to Play Store review scrapper observer\n - `pip install obsei[google-play-api]`: To install dependencies related to Google official play store review API based observer\n - `pip install obsei[app-store-scraper]`: To install dependencies related to Apple App Store review scrapper observer\n - `pip install obsei[reddit-scraper]`: To install dependencies related to Reddit post and comment scrapper observer\n - `pip install obsei[reddit-api]`: To install dependencies related to Reddit official api based observer\n - `pip install obsei[pandas]`: To install dependencies related to TSV/CSV/Pandas based observer and informer\n - `pip install obsei[google-news-scraper]`: To install dependencies 
related to Google news scrapper observer\n - `pip install obsei[facebook-api]`: To install dependencies related to Facebook official page post and comments api based observer\n - `pip install obsei[atlassian-api]`: To install dependencies related to Jira official api based informer\n - `pip install obsei[elasticsearch]`: To install dependencies related to elasticsearch informer\n - `pip install obsei[slack-api]`:To install dependencies related to Slack official api based informer\n\nYou can also mix multiple dependencies together in single installation command. For example to install dependencies \nTwitter observer, all analyzer, and Slack informer use following command -\n```shell\npip install obsei[twitter-api, analyzer, slack-api]\n```\n\n\n## How to use\n\nExpand the following steps and create a workflow -\n\n\u003cdetails\u003e\u003csummary\u003e\u003cb\u003eStep 1: Configure Source/Observer\u003c/b\u003e\u003c/summary\u003e\n\n\u003ctable \u003e\u003ctbody \u003e\u003ctr\u003e\u003c/tr\u003e\u003ctr\u003e\n\u003ctd\u003e\u003cdetails \u003e\u003csummary\u003e\u003cimg style=\"vertical-align:middle;margin:2px 10px\" src=\"https://raw.githubusercontent.com/obsei/obsei-resources/master/logos/twitter.png\" width=\"20\" height=\"20\"\u003e\u003cb\u003eTwitter\u003c/b\u003e\u003c/summary\u003e\u003chr\u003e\n\n```python\nfrom obsei.source.twitter_source import TwitterCredentials, TwitterSource, TwitterSourceConfig\n\n# initialize twitter source config\nsource_config = TwitterSourceConfig(\n   keywords=[\"issue\"], # Keywords, @user or #hashtags\n   lookup_period=\"1h\", # Lookup period from current time, format: `\u003cnumber\u003e\u003cd|h|m\u003e` (day|hour|minute)\n   cred_info=TwitterCredentials(\n       # Enter your twitter consumer key and secret. 
Get it from https://developer.twitter.com/en/apply-for-access\n       consumer_key=\"\u003ctwitter_consumer_key\u003e\",\n       consumer_secret=\"\u003ctwitter_consumer_secret\u003e\",\n       bearer_token='\u003cENTER BEARER TOKEN\u003e',\n   )\n)\n\n# initialize tweets retriever\nsource = TwitterSource()\n```\n\n\u003c/details\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd\u003e\u003cdetails \u003e\u003csummary\u003e\u003cimg style=\"vertical-align:middle;margin:2px 10px\" src=\"https://raw.githubusercontent.com/obsei/obsei-resources/master/logos/Youtube.png\" width=\"20\" height=\"20\"\u003e\u003cb\u003eYoutube Scrapper\u003c/b\u003e\u003c/summary\u003e\u003chr\u003e\n\n```python\nfrom obsei.source.youtube_scrapper import YoutubeScrapperSource, YoutubeScrapperConfig\n\n# initialize Youtube source config\nsource_config = YoutubeScrapperConfig(\n    video_url=\"https://www.youtube.com/watch?v=uZfns0JIlFk\", # Youtube video URL\n    fetch_replies=True, # Fetch replies to comments\n    max_comments=10, # Total number of comments and replies to fetch\n    lookup_period=\"1Y\", # Lookup period from current time, format: `\u003cnumber\u003e\u003cd|h|m|M|Y\u003e` (day|hour|minute|month|year)\n)\n\n# initialize Youtube comments retriever\nsource = YoutubeScrapperSource()\n```\n\n\u003c/details\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd\u003e\u003cdetails \u003e\u003csummary\u003e\u003cimg style=\"vertical-align:middle;margin:2px 10px\" src=\"https://raw.githubusercontent.com/obsei/obsei-resources/master/logos/facebook.png\" width=\"20\" height=\"20\"\u003e\u003cb\u003eFacebook\u003c/b\u003e\u003c/summary\u003e\u003chr\u003e\n\n```python\nfrom obsei.source.facebook_source import FacebookCredentials, FacebookSource, FacebookSourceConfig\n\n# initialize facebook source config\nsource_config = FacebookSourceConfig(\n   page_id=\"110844591144719\", # Facebook page id, for example this one for Obsei\n   lookup_period=\"1h\", # Lookup 
period from current time, format: `\u003cnumber\u003e\u003cd|h|m\u003e` (day|hour|minute)\n   cred_info=FacebookCredentials(\n       # Enter your facebook app_id, app_secret and long_term_token. Get it from https://developers.facebook.com/apps/\n       app_id=\"\u003cfacebook_app_id\u003e\",\n       app_secret=\"\u003cfacebook_app_secret\u003e\",\n       long_term_token=\"\u003cfacebook_long_term_token\u003e\",\n   )\n)\n\n# initialize facebook post comments retriever\nsource = FacebookSource()\n```\n\n\u003c/details\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd\u003e\u003cdetails \u003e\u003csummary\u003e\u003cimg style=\"vertical-align:middle;margin:2px 10px\" src=\"https://raw.githubusercontent.com/obsei/obsei-resources/master/logos/gmail.png\" width=\"20\" height=\"20\"\u003e\u003cb\u003eEmail\u003c/b\u003e\u003c/summary\u003e\u003chr\u003e\n\n```python\nfrom obsei.source.email_source import EmailConfig, EmailCredInfo, EmailSource\n\n# initialize email source config\nsource_config = EmailConfig(\n   # List of IMAP servers for most commonly used email providers\n   # https://www.systoolsgroup.com/imap/\n   # Also, if you're using a Gmail account then make sure you allow less secure apps on your account -\n   # https://myaccount.google.com/lesssecureapps?pli=1\n   # Also enable IMAP access -\n   # https://mail.google.com/mail/u/0/#settings/fwdandpop\n   imap_server=\"imap.gmail.com\", # Enter IMAP server\n   cred_info=EmailCredInfo(\n       # Enter your email account username and password\n       username=\"\u003cemail_username\u003e\",\n       password=\"\u003cemail_password\u003e\"\n   ),\n   lookup_period=\"1h\" # Lookup period from current time, format: `\u003cnumber\u003e\u003cd|h|m\u003e` (day|hour|minute)\n)\n\n# initialize email retriever\nsource = EmailSource()\n```\n\n\u003c/details\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd\u003e\u003cdetails \u003e\u003csummary\u003e\u003cimg 
style=\"vertical-align:middle;margin:2px 10px\" src=\"https://raw.githubusercontent.com/obsei/obsei-resources/master/logos/google_maps.png\" width=\"20\" height=\"20\"\u003e\u003cb\u003eGoogle Maps Reviews Scrapper\u003c/b\u003e\u003c/summary\u003e\u003chr\u003e\n\n```python\nfrom obsei.source.google_maps_reviews import OSGoogleMapsReviewsSource, OSGoogleMapsReviewsConfig\n\n# initialize Outscraper Maps review source config\nsource_config = OSGoogleMapsReviewsConfig(\n   # Collect API key from https://outscraper.com/\n   api_key=\"\u003cEnter Your API Key\u003e\",\n   # Enter a Google Maps link or place id\n   # For example, the link below is for the \"Taj Mahal\"\n   queries=[\"https://www.google.co.in/maps/place/Taj+Mahal/@27.1751496,78.0399535,17z/data=!4m5!3m4!1s0x39747121d702ff6d:0xdd2ae4803f767dde!8m2!3d27.1751448!4d78.0421422\"],\n   number_of_reviews=10,\n)\n\n\n# initialize Outscraper Maps review retriever\nsource = OSGoogleMapsReviewsSource()\n```\n\n\u003c/details\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd\u003e\u003cdetails \u003e\u003csummary\u003e\u003cimg style=\"vertical-align:middle;margin:2px 10px\" src=\"https://raw.githubusercontent.com/obsei/obsei-resources/master/logos/appstore.png\" width=\"20\" height=\"20\"\u003e\u003cb\u003eAppStore Reviews Scrapper\u003c/b\u003e\u003c/summary\u003e\u003chr\u003e\n\n```python\nfrom obsei.source.appstore_scrapper import AppStoreScrapperConfig, AppStoreScrapperSource\n\n# initialize app store source config\nsource_config = AppStoreScrapperConfig(\n   # Needs two parameters: app_id and country.\n   # `app_id` can be found at the end of the app's URL in the App Store.\n   # For example - https://apps.apple.com/us/app/xcode/id497799835\n   # `497799835` is the app_id for Xcode and `us` is the country.\n   countries=[\"us\"],\n   app_id=\"497799835\",\n   lookup_period=\"1h\" # Lookup period from current time, format: `\u003cnumber\u003e\u003cd|h|m\u003e` (day|hour|minute)\n)\n\n\n# initialize app store 
reviews retriever\nsource = AppStoreScrapperSource()\n```\n\n\u003c/details\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd\u003e\u003cdetails \u003e\u003csummary\u003e\u003cimg style=\"vertical-align:middle;margin:2px 10px\" src=\"https://raw.githubusercontent.com/obsei/obsei-resources/master/logos/playstore.png\" width=\"20\" height=\"20\"\u003e\u003cb\u003ePlay Store Reviews Scrapper\u003c/b\u003e\u003c/summary\u003e\u003chr\u003e\n\n```python\nfrom obsei.source.playstore_scrapper import PlayStoreScrapperConfig, PlayStoreScrapperSource\n\n# initialize play store source config\nsource_config = PlayStoreScrapperConfig(\n   # Needs two parameters: package_name and country.\n   # `package_name` can be found at the end of the app's URL in the Play Store.\n   # For example - https://play.google.com/store/apps/details?id=com.google.android.gm\u0026hl=en\u0026gl=US\n   # `com.google.android.gm` is the package_name for Gmail and `us` is the country.\n   countries=[\"us\"],\n   package_name=\"com.google.android.gm\",\n   lookup_period=\"1h\" # Lookup period from current time, format: `\u003cnumber\u003e\u003cd|h|m\u003e` (day|hour|minute)\n)\n\n# initialize play store reviews retriever\nsource = PlayStoreScrapperSource()\n```\n\n\u003c/details\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd\u003e\u003cdetails \u003e\u003csummary\u003e\u003cimg style=\"vertical-align:middle;margin:2px 10px\" src=\"https://raw.githubusercontent.com/obsei/obsei-resources/master/logos/reddit.png\" width=\"20\" height=\"20\"\u003e\u003cb\u003eReddit\u003c/b\u003e\u003c/summary\u003e\u003chr\u003e\n\n```python\nfrom obsei.source.reddit_source import RedditConfig, RedditSource, RedditCredInfo\n\n# initialize reddit source config\nsource_config = RedditConfig(\n   subreddits=[\"wallstreetbets\"], # List of subreddits\n   # Reddit account username and password\n   # You can also enter reddit client_id and client_secret or refresh_token\n   # Create credential at 
https://www.reddit.com/prefs/apps\n   # Also refer https://praw.readthedocs.io/en/latest/getting_started/authentication.html\n   # Currently Password Flow, Read Only Mode and Saved Refresh Token Mode are supported\n   cred_info=RedditCredInfo(\n       username=\"\u003creddit_username\u003e\",\n       password=\"\u003creddit_password\u003e\"\n   ),\n   lookup_period=\"1h\" # Lookup period from current time, format: `\u003cnumber\u003e\u003cd|h|m\u003e` (day|hour|minute)\n)\n\n# initialize reddit retriever\nsource = RedditSource()\n```\n\n\u003c/details\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd\u003e\u003cdetails \u003e\u003csummary\u003e\u003cimg style=\"vertical-align:middle;margin:2px 10px\" src=\"https://raw.githubusercontent.com/obsei/obsei-resources/master/logos/reddit.png\" width=\"20\" height=\"20\"\u003e\u003cb\u003eReddit Scraper\u003c/b\u003e\u003c/summary\u003e\u003chr\u003e\n\n\u003ci\u003eNote: Reddit heavily rate-limits scrapers, so use this to fetch small amounts of data over long periods\u003c/i\u003e\n\n```python\nfrom obsei.source.reddit_scrapper import RedditScrapperConfig, RedditScrapperSource\n\n# initialize reddit scrapper source config\nsource_config = RedditScrapperConfig(\n   # Reddit subreddit, search, etc. RSS URL. 
For the proper URL format, refer to the following link -\n   # https://www.reddit.com/r/pathogendavid/comments/tv8m9/pathogendavids_guide_to_rss_and_reddit/\n   url=\"https://www.reddit.com/r/wallstreetbets/comments/.rss?sort=new\",\n   lookup_period=\"1h\" # Lookup period from current time, format: `\u003cnumber\u003e\u003cd|h|m\u003e` (day|hour|minute)\n)\n\n# initialize reddit retriever\nsource = RedditScrapperSource()\n```\n\n\u003c/details\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd\u003e\u003cdetails \u003e\u003csummary\u003e\u003cimg style=\"vertical-align:middle;margin:2px 10px\" src=\"https://raw.githubusercontent.com/obsei/obsei-resources/master/logos/googlenews.png\" width=\"20\" height=\"20\"\u003e\u003cb\u003eGoogle News\u003c/b\u003e\u003c/summary\u003e\u003chr\u003e\n\n```python\nfrom obsei.source.google_news_source import GoogleNewsConfig, GoogleNewsSource\n\n# initialize Google News source config\nsource_config = GoogleNewsConfig(\n   query='bitcoin',\n   max_results=5,\n   # To fetch the full article text, enable the `fetch_article` flag\n   # By default, Google News gives only the title and a highlight\n   fetch_article=True,\n   # proxy='http://127.0.0.1:8080'\n)\n\n# initialize Google News retriever\nsource = GoogleNewsSource()\n```\n\n\u003c/details\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd\u003e\u003cdetails \u003e\u003csummary\u003e\u003cimg style=\"vertical-align:middle;margin:2px 10px\" src=\"https://raw.githubusercontent.com/obsei/obsei-resources/master/logos/webcrawler.png\" width=\"20\" height=\"20\"\u003e\u003cb\u003eWeb Crawler\u003c/b\u003e\u003c/summary\u003e\u003chr\u003e\n\n```python\nfrom obsei.source.website_crawler_source import TrafilaturaCrawlerConfig, TrafilaturaCrawlerSource\n\n# initialize website crawler source config\nsource_config = TrafilaturaCrawlerConfig(\n   urls=['https://obsei.github.io/obsei/']\n)\n\n# initialize website text retriever\nsource = 
TrafilaturaCrawlerSource()\n```\n\n\u003c/details\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd\u003e\u003cdetails \u003e\u003csummary\u003e\u003cimg style=\"vertical-align:middle;margin:2px 10px\" src=\"https://raw.githubusercontent.com/obsei/obsei-resources/master/logos/pandas.svg\" width=\"20\" height=\"20\"\u003e\u003cb\u003ePandas DataFrame\u003c/b\u003e\u003c/summary\u003e\u003chr\u003e\n\n```python\nimport pandas as pd\nfrom obsei.source.pandas_source import PandasSource, PandasSourceConfig\n\n# Initialize your Pandas DataFrame from sources like CSV, Excel, SQL, etc.\n# In the following example we read a CSV which has two columns: title and text\ncsv_file = \"https://raw.githubusercontent.com/deepset-ai/haystack/master/tutorials/small_generator_dataset.csv\"\ndataframe = pd.read_csv(csv_file)\n\n# initialize pandas source config\nsource_config = PandasSourceConfig(\n   dataframe=dataframe,\n   text_columns=[\"title\", \"text\"],\n)\n\n# initialize pandas source\nsource = PandasSource()\n```\n\n\u003c/details\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003c/tbody\u003e\n\u003c/table\u003e\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\u003csummary\u003e\u003cb\u003eStep 2: Configure Analyzer\u003c/b\u003e\u003c/summary\u003e\n\n\u003ci\u003eNote: To run transformers in an offline mode, check [transformers offline mode](https://huggingface.co/transformers/installation.html#offline-mode).\u003c/i\u003e\n\n\u003cp\u003eSome analyzers support GPU; to utilize it, pass the \u003cb\u003edevice\u003c/b\u003e parameter.\nList of possible values of the \u003cb\u003edevice\u003c/b\u003e parameter (default value \u003ci\u003eauto\u003c/i\u003e):\n\u003col\u003e\n    \u003cli\u003e \u003cb\u003eauto\u003c/b\u003e: GPU (cuda:0) will be used if available, otherwise CPU will be used\n    \u003cli\u003e \u003cb\u003ecpu\u003c/b\u003e: CPU will be used\n    \u003cli\u003e \u003cb\u003ecuda:{id}\u003c/b\u003e - GPU will be used with the 
provided CUDA device id\n\u003c/ol\u003e\n\u003c/p\u003e\n\n\u003ctable \u003e\u003ctbody \u003e\u003ctr\u003e\u003c/tr\u003e\u003ctr\u003e\n\u003ctd\u003e\u003cdetails \u003e\u003csummary\u003e\u003cimg style=\"vertical-align:middle;margin:2px 10px\" src=\"https://raw.githubusercontent.com/obsei/obsei-resources/master/logos/classification.png\" width=\"20\" height=\"20\"\u003e\u003cb\u003eText Classification\u003c/b\u003e\u003c/summary\u003e\u003chr\u003e\n\nText classification: Classify text into user-provided categories.\n\n```python\nfrom obsei.analyzer.classification_analyzer import ClassificationAnalyzerConfig, ZeroShotClassificationAnalyzer\n\n# initialize classification analyzer config\n# It can also detect sentiments if \"positive\" and \"negative\" labels are added.\nanalyzer_config = ClassificationAnalyzerConfig(\n   labels=[\"service\", \"delay\", \"performance\"],\n)\n\n# initialize classification analyzer\n# For supported models refer https://huggingface.co/models?filter=zero-shot-classification\ntext_analyzer = ZeroShotClassificationAnalyzer(\n   model_name_or_path=\"typeform/mobilebert-uncased-mnli\",\n   device=\"auto\"\n)\n```\n\n\u003c/details\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd\u003e\u003cdetails \u003e\u003csummary\u003e\u003cimg style=\"vertical-align:middle;margin:2px 10px\" src=\"https://raw.githubusercontent.com/obsei/obsei-resources/master/logos/sentiment.png\" width=\"20\" height=\"20\"\u003e\u003cb\u003eSentiment Analyzer\u003c/b\u003e\u003c/summary\u003e\u003chr\u003e\n\nSentiment Analyzer: Detect the sentiment of the text. 
The text classification analyzer can also perform sentiment analysis, but if you don't want to use a heavy-duty NLP model, use the less resource-hungry, dictionary-based VADER sentiment detector.\n\n```python\nfrom obsei.analyzer.sentiment_analyzer import VaderSentimentAnalyzer\n\n# Vader does not need any configuration settings\nanalyzer_config = None\n\n# initialize vader sentiment analyzer\ntext_analyzer = VaderSentimentAnalyzer()\n```\n\n\u003c/details\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd\u003e\u003cdetails \u003e\u003csummary\u003e\u003cimg style=\"vertical-align:middle;margin:2px 10px\" src=\"https://raw.githubusercontent.com/obsei/obsei-resources/master/logos/ner.png\" width=\"20\" height=\"20\"\u003e\u003cb\u003eNER Analyzer\u003c/b\u003e\u003c/summary\u003e\u003chr\u003e\n\nNER (Named-Entity Recognition) Analyzer: Extract information and classify named entities mentioned in text into pre-defined categories such as person names, organizations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc.\n\n```python\nfrom obsei.analyzer.ner_analyzer import NERAnalyzer\n\n# NER analyzer does not need configuration settings\nanalyzer_config = None\n\n# initialize ner analyzer\n# For supported models refer https://huggingface.co/models?filter=token-classification\ntext_analyzer = NERAnalyzer(\n   model_name_or_path=\"elastic/distilbert-base-cased-finetuned-conll03-english\",\n   device=\"auto\"\n)\n```\n\n\u003c/details\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd\u003e\u003cdetails \u003e\u003csummary\u003e\u003cimg style=\"vertical-align:middle;margin:2px 10px\" src=\"https://raw.githubusercontent.com/obsei/obsei-resources/master/logos/translator.png\" width=\"20\" height=\"20\"\u003e\u003cb\u003eTranslator\u003c/b\u003e\u003c/summary\u003e\u003chr\u003e\n\n```python\nfrom obsei.analyzer.translation_analyzer import TranslationAnalyzer\n\n# Translator does not need analyzer config\nanalyzer_config = 
None\n\n# initialize translator\n# For supported models refer https://huggingface.co/models?pipeline_tag=translation\nanalyzer = TranslationAnalyzer(\n   model_name_or_path=\"Helsinki-NLP/opus-mt-hi-en\",\n   device=\"auto\"\n)\n```\n\n\u003c/details\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd\u003e\u003cdetails \u003e\u003csummary\u003e\u003cimg style=\"vertical-align:middle;margin:2px 10px\" src=\"https://raw.githubusercontent.com/obsei/obsei-resources/master/logos/pii.png\" width=\"20\" height=\"20\"\u003e\u003cb\u003ePII Anonymizer\u003c/b\u003e\u003c/summary\u003e\u003chr\u003e\n\n```python\nfrom obsei.analyzer.pii_analyzer import PresidioEngineConfig, PresidioModelConfig, \\\n   PresidioPIIAnalyzer, PresidioPIIAnalyzerConfig\n\n# initialize pii analyzer's config\nanalyzer_config = PresidioPIIAnalyzerConfig(\n   # Whether to return only the PII analysis or to anonymize the text\n   analyze_only=False,\n   # Whether to return detailed information about the anonymization decision\n   return_decision_process=True\n)\n\n# initialize pii analyzer\nanalyzer = PresidioPIIAnalyzer(\n   engine_config=PresidioEngineConfig(\n       # spacy and stanza nlp engines are supported\n       # For more info refer\n       # https://microsoft.github.io/presidio/analyzer/developing_recognizers/#utilize-spacy-or-stanza\n       nlp_engine_name=\"spacy\",\n       # Update desired spacy model and language\n       models=[PresidioModelConfig(model_name=\"en_core_web_lg\", lang_code=\"en\")]\n   )\n)\n```\n\n\u003c/details\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd\u003e\u003cdetails \u003e\u003csummary\u003e\u003cimg style=\"vertical-align:middle;margin:2px 10px\" src=\"https://raw.githubusercontent.com/obsei/obsei-resources/master/logos/dummy.png\" width=\"20\" height=\"20\"\u003e\u003cb\u003eDummy Analyzer\u003c/b\u003e\u003c/summary\u003e\u003chr\u003e\n\nDummy Analyzer: Does nothing. 
It is simply used to transform the input (TextPayload) into output (TextPayload) while adding user-supplied dummy data.\n\n```python\nfrom obsei.analyzer.dummy_analyzer import DummyAnalyzer, DummyAnalyzerConfig\n\n# initialize dummy analyzer's configuration settings\nanalyzer_config = DummyAnalyzerConfig()\n\n# initialize dummy analyzer\nanalyzer = DummyAnalyzer()\n```\n\n\u003c/details\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003c/tbody\u003e\n\u003c/table\u003e\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\u003csummary\u003e\u003cb\u003eStep 3: Configure Sink/Informer\u003c/b\u003e\u003c/summary\u003e\n\n\u003ctable \u003e\u003ctbody \u003e\u003ctr\u003e\u003c/tr\u003e\u003ctr\u003e\n\u003ctd\u003e\u003cdetails \u003e\u003csummary\u003e\u003cimg style=\"vertical-align:middle;margin:2px 10px\" src=\"https://raw.githubusercontent.com/obsei/obsei-resources/master/logos/slack.svg\" width=\"25\" height=\"25\"\u003e\u003cb\u003eSlack\u003c/b\u003e\u003c/summary\u003e\u003chr\u003e\n\n```python\nfrom obsei.sink.slack_sink import SlackSink, SlackSinkConfig\n\n# initialize slack sink config\nsink_config = SlackSinkConfig(\n   # Provide slack bot/app token\n   # For more details refer https://slack.com/intl/en-de/help/articles/215770388-Create-and-regenerate-API-tokens\n   slack_token=\"\u003cSlack_app_token\u003e\",\n   # To get channel id refer https://stackoverflow.com/questions/40940327/what-is-the-simplest-way-to-find-a-slack-team-id-and-a-channel-id\n   channel_id=\"C01LRS6CT9Q\"\n)\n\n# initialize slack sink\nsink = SlackSink()\n```\n\n\u003c/details\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd\u003e\u003cdetails \u003e\u003csummary\u003e\u003cimg style=\"vertical-align:middle;margin:2px 10px\" src=\"https://raw.githubusercontent.com/obsei/obsei-resources/master/logos/zendesk.png\" width=\"20\" height=\"20\"\u003e\u003cb\u003eZendesk\u003c/b\u003e\u003c/summary\u003e\u003chr\u003e\n\n```python\nfrom obsei.sink.zendesk_sink import 
ZendeskSink, ZendeskSinkConfig, ZendeskCredInfo\n\n# initialize zendesk sink config\nsink_config = ZendeskSinkConfig(\n   # provide zendesk domain\n   domain=\"zendesk.com\",\n   # provide subdomain if you have one\n   subdomain=None,\n   # Enter zendesk user details\n   cred_info=ZendeskCredInfo(\n       email=\"\u003czendesk_user_email\u003e\",\n       password=\"\u003czendesk_password\u003e\"\n   )\n)\n\n# initialize zendesk sink\nsink = ZendeskSink()\n```\n\n\u003c/details\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd\u003e\u003cdetails \u003e\u003csummary\u003e\u003cimg style=\"vertical-align:middle;margin:2px 10px\" src=\"https://raw.githubusercontent.com/obsei/obsei-resources/master/logos/jira.png\" width=\"20\" height=\"20\"\u003e\u003cb\u003eJira\u003c/b\u003e\u003c/summary\u003e\u003chr\u003e\n\n```python\nfrom obsei.sink.jira_sink import JiraSink, JiraSinkConfig\n\n# For testing purposes you can start a Jira server locally\n# Refer https://developer.atlassian.com/server/framework/atlassian-sdk/atlas-run-standalone/\n\n# initialize Jira sink config\nsink_config = JiraSinkConfig(\n   url=\"http://localhost:2990/jira\", # Jira server url\n   # Jira username \u0026 password for a user who has permission to create issues\n   username=\"\u003cusername\u003e\",\n   password=\"\u003cpassword\u003e\",\n   # The type of issue to be created\n   # For more information refer https://support.atlassian.com/jira-cloud-administration/docs/what-are-issue-types/\n   issue_type={\"name\": \"Task\"},\n   # The project under which the issue will be created\n   # For more information refer https://support.atlassian.com/jira-software-cloud/docs/what-is-a-jira-software-project/\n   project={\"key\": \"CUS\"},\n)\n\n# initialize Jira sink\nsink = JiraSink()\n```\n\n\u003c/details\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd\u003e\u003cdetails \u003e\u003csummary\u003e\u003cimg style=\"vertical-align:middle;margin:2px 10px\" 
src=\"https://raw.githubusercontent.com/obsei/obsei-resources/master/logos/elastic.png\" width=\"20\" height=\"20\"\u003e\u003cb\u003eElasticSearch\u003c/b\u003e\u003c/summary\u003e\u003chr\u003e\n\n```python\nfrom obsei.sink.elasticsearch_sink import ElasticSearchSink, ElasticSearchSinkConfig\n\n# For testing purposes you can start an Elasticsearch server locally via Docker\n# `docker run -d --name elasticsearch -p 9200:9200 -e \"discovery.type=single-node\" elasticsearch:8.5.0`\n\n# initialize Elasticsearch sink config\nsink_config = ElasticSearchSinkConfig(\n   # Elasticsearch server\n   hosts=\"http://localhost:9200\",\n   # Index name; it will be created if it does not exist\n   index_name=\"test\",\n)\n\n# initialize Elasticsearch sink\nsink = ElasticSearchSink()\n```\n\n\u003c/details\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd\u003e\u003cdetails \u003e\u003csummary\u003e\u003cimg style=\"vertical-align:middle;margin:2px 10px\" src=\"https://raw.githubusercontent.com/obsei/obsei-resources/master/logos/http_api.png\" width=\"20\" height=\"20\"\u003e\u003cb\u003eHttp\u003c/b\u003e\u003c/summary\u003e\u003chr\u003e\n\n```python\nfrom obsei.sink.http_sink import HttpSink, HttpSinkConfig\n\n# For testing purposes you can create a mock HTTP server via Postman\n# For more details refer https://learning.postman.com/docs/designing-and-developing-your-api/mocking-data/setting-up-mock/\n\n# initialize http sink config (Currently only POST call is supported)\nsink_config = HttpSinkConfig(\n   # provide http server url\n   url=\"https://localhost:8080/api/path\",\n   # Here you can add headers you would like to pass with the request\n   headers={\n       \"Content-type\": \"application/json\"\n   }\n)\n\n# To modify or convert the payload, create a convertor class\n# Refer to obsei.sink.dailyget_sink.PayloadConvertor for an example\n\n# initialize http sink\nsink = 
HttpSink()\n```\n\n\u003c/details\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd\u003e\u003cdetails \u003e\u003csummary\u003e\u003cimg style=\"vertical-align:middle;margin:2px 10px\" src=\"https://raw.githubusercontent.com/obsei/obsei-resources/master/logos/pandas.svg\" width=\"20\" height=\"20\"\u003e\u003cb\u003ePandas DataFrame\u003c/b\u003e\u003c/summary\u003e\u003chr\u003e\n\n```python\nfrom pandas import DataFrame\nfrom obsei.sink.pandas_sink import PandasSink, PandasSinkConfig\n\n# initialize pandas sink config\nsink_config = PandasSinkConfig(\n   dataframe=DataFrame()\n)\n\n# initialize pandas sink\nsink = PandasSink()\n```\n\n\u003c/details\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd\u003e\u003cdetails \u003e\u003csummary\u003e\u003cimg style=\"vertical-align:middle;margin:2px 10px\" src=\"https://raw.githubusercontent.com/obsei/obsei-resources/master/logos/logger.png\" width=\"20\" height=\"20\"\u003e\u003cb\u003eLogger\u003c/b\u003e\u003c/summary\u003e\u003chr\u003e\n\nThis is useful for testing and dry-running the pipeline.\n\n```python\nfrom obsei.sink.logger_sink import LoggerSink, LoggerSinkConfig\nimport logging\nimport sys\n\nlogger = logging.getLogger(\"Obsei\")\nlogging.basicConfig(stream=sys.stdout, level=logging.INFO)\n\n# initialize logger sink config\nsink_config = LoggerSinkConfig(\n   logger=logger,\n   level=logging.INFO\n)\n\n# initialize logger sink\nsink = LoggerSink()\n```\n\n\u003c/details\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003c/tbody\u003e\n\u003c/table\u003e\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\u003csummary\u003e\u003cb\u003eStep 4: Join and create workflow\u003c/b\u003e\u003c/summary\u003e\n\n`source` fetches data from the configured source and feeds it to the `analyzer` for processing; the analyzer's output is then fed to the `sink`, which delivers the notification.\n\n```python\n# Uncomment if you want logging\n# import logging\n# import sys\n# logger = logging.getLogger(__name__)\n# 
logging.basicConfig(stream=sys.stdout, level=logging.INFO)\n\n# This will fetch information from the configured source, i.e. Twitter, App Store, etc.\nsource_response_list = source.lookup(source_config)\n\n# Uncomment if you want to log source response\n# for idx, source_response in enumerate(source_response_list):\n#     logger.info(f\"source_response#'{idx}'='{source_response.__dict__}'\")\n\n# This will execute the analyzer (sentiment, classification, etc.) on source data with the provided analyzer_config\nanalyzer_response_list = text_analyzer.analyze_input(\n    source_response_list=source_response_list,\n    analyzer_config=analyzer_config\n)\n\n# Uncomment if you want to log analyzer response\n# for idx, an_response in enumerate(analyzer_response_list):\n#    logger.info(f\"analyzer_response#'{idx}'='{an_response.__dict__}'\")\n\n# Analyzer output is added to segmented_data\n# Uncomment to log it\n# for idx, an_response in enumerate(analyzer_response_list):\n#    logger.info(f\"analyzed_data#'{idx}'='{an_response.segmented_data.__dict__}'\")\n\n# This will send analyzed output to the configured sink, i.e. Slack, Zendesk, etc.\nsink_response_list = sink.send_data(analyzer_response_list, sink_config)\n\n# Uncomment if you want to log sink response\n# for sink_response in sink_response_list:\n#     if sink_response is not None:\n#         logger.info(f\"sink_response='{sink_response}'\")\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\u003csummary\u003e\u003cb\u003eStep 5: Execute workflow\u003c/b\u003e\u003c/summary\u003e\nCopy the code snippets from \u003cb\u003eSteps 1 to 4\u003c/b\u003e into a Python file, for example \u003ccode\u003eexample.py\u003c/code\u003e, and execute the following command -\n\n```shell\npython example.py\n```\n\n\u003c/details\u003e\n\n## Demo\n\nWe have a minimal [Streamlit](https://streamlit.io/)-based UI that you can use to test Obsei.\n\n![Screenshot](https://raw.githubusercontent.com/obsei/obsei-resources/master/images/obsei-ui-demo.png)\n\n### Watch UI 
demo video\n\n[![Introductory and demo video](https://img.youtube.com/vi/GTF-Hy96gvY/2.jpg)](https://www.youtube.com/watch?v=GTF-Hy96gvY)\n\nCheck the demo at [![](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/obsei/obsei-demo)\n\n(**Note**: Sometimes the Streamlit demo might not work due to rate limiting; in such cases, use the Docker image locally.)\n\nTo test locally, just run\n\n```\ndocker run -d --name obsei-ui -p 8501:8501 obsei/obsei-ui-demo\n\n# You can find the UI at http://localhost:8501\n```\n\n**To run an Obsei workflow easily using GitHub Actions (no sign-ups or cloud hosting required), refer to this [repo](https://github.com/obsei/demo-workflow-action)**.\n\n## Companies/Projects using Obsei\n\nHere are some companies/projects (in alphabetical order) using Obsei. To add your company/project to the list, please raise a PR or contact us via [email](mailto:contact@obsei.com).\n\n- [Oraika](https://www.oraika.com): Contextually understand customer feedback\n- [1Page](https://www.get1page.com/): Giving a better context in meetings and calls\n- [Spacepulse](http://spacepulse.in/): The operating system for spaces\n- [Superblog](https://superblog.ai/): A blazing fast alternative to WordPress and Medium\n- [Zolve](https://zolve.com/): Creating a financial world beyond borders\n- [Utilize](https://www.utilize.app/): No-code app builder for businesses with a deskless workforce\n\n## Articles\n\n\u003ctable\u003e\n\u003cthead\u003e\n\u003ctr class=\"header\"\u003e\n\u003cth\u003eSr. 
No.\u003c/th\u003e\n\u003cth\u003eTitle\u003c/th\u003e\n\u003cth\u003eAuthor\u003c/th\u003e\n\u003c/tr\u003e\n\u003c/thead\u003e\n\u003ctbody\u003e\n\u003ctr\u003e\n\u003ctd\u003e1\u003c/td\u003e\n\u003ctd\u003e\n    \u003ca href=\"https://reenabapna.medium.com/ai-based-comparative-customer-feedback-analysis-using-deep-learning-models-def0dc77aaee\"\u003eAI based Comparative Customer Feedback Analysis Using Obsei\u003c/a\u003e\n\u003c/td\u003e\n\u003ctd\u003e\n    \u003ca href=\"https://www.linkedin.com/in/reena-bapna-66a8691a\"\u003eReena Bapna\u003c/a\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd\u003e2\u003c/td\u003e\n\u003ctd\u003e\n    \u003ca href=\"https://medium.com/mlearning-ai/linkedin-app-user-feedback-analysis-9c9f98464daa\"\u003eLinkedIn App - User Feedback Analysis\u003c/a\u003e\n\u003c/td\u003e\n\u003ctd\u003e\n    \u003ca href=\"http://www.linkedin.com/in/himanshusharmads\"\u003eHimanshu Sharma\u003c/a\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003c/tbody\u003e\n\u003c/table\u003e\n\n## Tutorials\n\n\u003ctable\u003e\n\u003cthead\u003e\n\u003ctr class=\"header\"\u003e\n\u003cth\u003eSr. 
No.\u003c/th\u003e\n\u003cth\u003eWorkflow\u003c/th\u003e\n\u003cth\u003eColab\u003c/th\u003e\n\u003cth\u003eBinder\u003c/th\u003e\n\u003c/tr\u003e\n\u003c/thead\u003e\n\u003ctbody\u003e\n\u003ctr\u003e\n\u003ctd rowspan=\"2\"\u003e1\u003c/td\u003e\n\u003ctd colspan=\"3\"\u003eObserve app reviews from Google play store, Analyze them by performing text classification and then Inform them on console via logger\u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd\u003ePlayStore Reviews → Classification → Logger\u003c/td\u003e\n\u003ctd\u003e\n    \u003ca href=\"https://colab.research.google.com/github/obsei/obsei/blob/master/tutorials/01_PlayStore_Classification_Logger.ipynb\"\u003e\n        \u003cimg alt=\"Colab\" src=\"https://colab.research.google.com/assets/colab-badge.svg\"\u003e\n    \u003c/a\u003e\n\u003c/td\u003e\n\u003ctd\u003e\n    \u003ca href=\"https://mybinder.org/v2/gh/obsei/obsei/HEAD?filepath=tutorials%2F01_PlayStore_Classification_Logger.ipynb\"\u003e\n        \u003cimg alt=\"Binder\" src=\"https://mybinder.org/badge_logo.svg\"\u003e\n    \u003c/a\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd rowspan=\"2\"\u003e2\u003c/td\u003e\n\u003ctd colspan=\"3\"\u003eObserve app reviews from Google play store, PreProcess text via various text cleaning functions, Analyze them by performing text classification, Inform them to Pandas DataFrame and store resultant CSV to Google Drive\u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd\u003ePlayStore Reviews → PreProcessing → Classification → Pandas DataFrame → CSV in Google Drive\u003c/td\u003e\n\u003ctd\u003e\n    \u003ca href=\"https://colab.research.google.com/github/obsei/obsei/blob/master/tutorials/02_PlayStore_PreProc_Classification_Pandas.ipynb\"\u003e\n        \u003cimg alt=\"Colab\" src=\"https://colab.research.google.com/assets/colab-badge.svg\"\u003e\n    \u003c/a\u003e\n\u003c/td\u003e\n\u003ctd\u003e\n    \u003ca 
href=\"https://mybinder.org/v2/gh/obsei/obsei/HEAD?filepath=tutorials%2F02_PlayStore_PreProc_Classification_Pandas.ipynb\"\u003e\n        \u003cimg alt=\"Binder\" src=\"https://mybinder.org/badge_logo.svg\"\u003e\n    \u003c/a\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd rowspan=\"2\"\u003e3\u003c/td\u003e\n\u003ctd colspan=\"3\"\u003eObserve app reviews from Apple app store, PreProcess text via various text cleaning functions, Analyze them by performing text classification, Inform them to Pandas DataFrame and store resultant CSV to Google Drive\u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd\u003eAppStore Reviews → PreProcessing → Classification → Pandas DataFrame → CSV in Google Drive\u003c/td\u003e\n\u003ctd\u003e\n    \u003ca href=\"https://colab.research.google.com/github/obsei/obsei/blob/master/tutorials/03_AppStore_PreProc_Classification_Pandas.ipynb\"\u003e\n        \u003cimg alt=\"Colab\" src=\"https://colab.research.google.com/assets/colab-badge.svg\"\u003e\n    \u003c/a\u003e\n\u003c/td\u003e\n\u003ctd\u003e\n    \u003ca href=\"https://mybinder.org/v2/gh/obsei/obsei/HEAD?filepath=tutorials%2F03_AppStore_PreProc_Classification_Pandas.ipynb\"\u003e\n        \u003cimg alt=\"Binder\" src=\"https://mybinder.org/badge_logo.svg\"\u003e\n    \u003c/a\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd rowspan=\"2\"\u003e4\u003c/td\u003e\n\u003ctd colspan=\"3\"\u003eObserve news articles from Google News, PreProcess text via various text cleaning functions, Analyze them by performing text classification while splitting text into small chunks and later computing the final inference using a given formula\u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd\u003eGoogle News → Text Cleaner → Text Splitter → Classification → Inference Aggregator\u003c/td\u003e\n\u003ctd\u003e\n    \u003ca 
href=\"https://colab.research.google.com/github/obsei/obsei/blob/master/tutorials/04_GoogleNews_Cleaner_Splitter_Classification_Aggregator.ipynb\"\u003e\n        \u003cimg alt=\"Colab\" src=\"https://colab.research.google.com/assets/colab-badge.svg\"\u003e\n    \u003c/a\u003e\n\u003c/td\u003e\n\u003ctd\u003e\n    \u003ca href=\"https://mybinder.org/v2/gh/obsei/obsei/HEAD?filepath=tutorials%2F04_GoogleNews_Cleaner_Splitter_Classification_Aggregator.ipynb\"\u003e\n        \u003cimg alt=\"Binder\" src=\"https://mybinder.org/badge_logo.svg\"\u003e\n    \u003c/a\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003c/tbody\u003e\n\u003c/table\u003e\n\n\u003cdetails\u003e\u003csummary\u003e\u003cb\u003e💡Tips: Handle large text classification via Obsei\u003c/b\u003e\u003c/summary\u003e\n\n![](https://raw.githubusercontent.com/obsei/obsei-resources/master/gifs/Long_Text_Classification.gif)\n\n\u003c/details\u003e\n\n## Documentation\n\nFor detailed installation instructions, usage, and examples, refer to our [documentation](https://obsei.github.io/obsei/).\n\n## Support and Release Matrix\n\n\u003ctable\u003e\n\u003cthead\u003e\n\u003ctr class=\"header\"\u003e\n\u003cth\u003e\u003c/th\u003e\n\u003cth\u003eLinux\u003c/th\u003e\n\u003cth\u003eMac\u003c/th\u003e\n\u003cth\u003eWindows\u003c/th\u003e\n\u003cth\u003eRemark\u003c/th\u003e\n\u003c/tr\u003e\n\u003c/thead\u003e\n\u003ctbody\u003e\n\u003ctr\u003e\n\u003ctd\u003eTests\u003c/td\u003e\n\u003ctd style=\"text-align:center\"\u003e✅\u003c/td\u003e\n\u003ctd style=\"text-align:center\"\u003e✅\u003c/td\u003e\n\u003ctd style=\"text-align:center\"\u003e✅\u003c/td\u003e\n\u003ctd\u003eLow coverage, as 3rd-party libs are difficult to test\u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd\u003ePIP\u003c/td\u003e\n\u003ctd style=\"text-align:center\"\u003e✅\u003c/td\u003e\n\u003ctd style=\"text-align:center\"\u003e✅\u003c/td\u003e\n\u003ctd style=\"text-align:center\"\u003e✅\u003c/td\u003e\n\u003ctd\u003eFully 
Supported\u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n\u003ctd\u003eConda\u003c/td\u003e\n\u003ctd style=\"text-align:center\"\u003e❌\u003c/td\u003e\n\u003ctd style=\"text-align:center\"\u003e❌\u003c/td\u003e\n\u003ctd style=\"text-align:center\"\u003e❌\u003c/td\u003e\n\u003ctd\u003eNot Supported\u003c/td\u003e\n\u003c/tr\u003e\n\u003c/tbody\u003e\n\u003c/table\u003e\n\n## Discussion forum\n\nDiscussions about _Obsei_ take place at the [community forum](https://github.com/obsei/obsei/discussions)\n\n## Changelogs\n\nRefer to [releases](https://github.com/obsei/obsei/releases) for changelogs\n\n## Security Issue\n\nFor any security issue, please contact us via [email](mailto:contact@oraika.com)\n\n## Stargazers over time\n\n[![Stargazers over time](https://starchart.cc/obsei/obsei.svg)](https://starchart.cc/obsei/obsei)\n\n## Maintainers\n\nThis project is maintained by [Oraika Technologies](https://www.oraika.com). [Lalit Pagaria](https://github.com/lalitpagaria) and [Girish Patel](https://github.com/GirishPatel) are the maintainers of this project.\n\n## License\n\n- Copyright holder: [Oraika Technologies](https://www.oraika.com)\n- Overall Apache 2.0; you can read the [License](https://github.com/obsei/obsei/blob/master/LICENSE) file.\n- For third-party components under secondary permissive or weak copyleft licenses (LGPL, MIT, BSD, etc.), refer to [Attribution](https://github.com/obsei/obsei/blob/master/ATTRIBUTION.md).\n- To keep the project commercial friendly, we avoid bringing third-party components with strong copyleft licenses (GPL, AGPL, etc.) 
into the project.\n\n## Attribution\n\nThis would not have been possible without these [open source software projects](https://github.com/obsei/obsei/blob/master/ATTRIBUTION.md).\n\n## Contribution\n\nFirst off, thank you for even considering contributing to this package; every contribution, big or small, is greatly appreciated.\nPlease refer to our [Contribution Guideline](https://github.com/obsei/obsei/blob/master/CONTRIBUTING.md) and [Code of Conduct](https://github.com/obsei/obsei/blob/master/CODE_OF_CONDUCT.md).\n\nThanks so much to all our contributors\n\n\u003ca href=\"https://github.com/obsei/obsei/graphs/contributors\"\u003e\n  \u003cimg src=\"https://contrib.rocks/image?repo=obsei/obsei\" /\u003e\n\u003c/a\u003e\n","funding_links":[],"categories":["Python","🧰 NLP Toolkits"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fobsei%2Fobsei","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fobsei%2Fobsei","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fobsei%2Fobsei/lists"}