{"id":13603165,"url":"https://github.com/nlpcloud/nlpcloud-js","last_synced_at":"2026-01-28T20:44:34.699Z","repository":{"id":44442462,"uuid":"333149721","full_name":"nlpcloud/nlpcloud-js","owner":"nlpcloud","description":"NLP Cloud serves high performance pre-trained or custom models for NER, sentiment-analysis, classification, summarization, paraphrasing, intent classification, product description and ad generation, chatbot, grammar and spelling correction, keywords and keyphrases extraction, text generation, image generation, code generation, and much more...","archived":false,"fork":false,"pushed_at":"2025-01-16T09:13:10.000Z","size":103,"stargazers_count":48,"open_issues_count":1,"forks_count":6,"subscribers_count":5,"default_branch":"master","last_synced_at":"2025-11-18T21:20:06.207Z","etag":null,"topics":["ad-generator","chatbot","code-generation","conversational-ai","embeddings","intent-classification","keywords-extraction","language-detection","machine-translation","ner","nlp","paraphrasing","question-answering","semantic-similarity","sentiment-analysis","text-classification","text-generation","text-summarization","tokenization"],"latest_commit_sha":null,"homepage":"https://nlpcloud.com","language":"JavaScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/nlpcloud.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2021-01-26T16:43:35.000Z","updated_at":"2025-01-16T09:13:12.000Z","dependencies_parsed_at":"2023-02-13T00:01:24.709Z","dependency_job_id":"d0efa298-22ed-4e1a-9a2c-fbb24d605e96","html_url":"https://github.com/nlpc
loud/nlpcloud-js","commit_stats":{"total_commits":82,"total_committers":2,"mean_commits":41.0,"dds":"0.012195121951219523","last_synced_commit":"9e0d8cc88119f0e720df4087390f0138fb6f4adc"},"previous_names":[],"tags_count":47,"template":false,"template_full_name":null,"purl":"pkg:github/nlpcloud/nlpcloud-js","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/nlpcloud%2Fnlpcloud-js","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/nlpcloud%2Fnlpcloud-js/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/nlpcloud%2Fnlpcloud-js/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/nlpcloud%2Fnlpcloud-js/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/nlpcloud","download_url":"https://codeload.github.com/nlpcloud/nlpcloud-js/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/nlpcloud%2Fnlpcloud-js/sbom","scorecard":{"id":690701,"data":{"date":"2025-08-11","repo":{"name":"github.com/nlpcloud/nlpcloud-js","commit":"af790fd091c7b404748967525bf57d474a19a35c"},"scorecard":{"version":"v5.2.1-40-gf6ed084d","commit":"f6ed084d17c9236477efd66e5b258b9d4cc7b389"},"score":2.6,"checks":[{"name":"Binary-Artifacts","score":10,"reason":"no binaries found in the repo","details":null,"documentation":{"short":"Determines if the project has generated executable (binary) artifacts in the source repository.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#binary-artifacts"}},{"name":"Dangerous-Workflow","score":-1,"reason":"no workflows found","details":null,"documentation":{"short":"Determines if the project's GitHub Action workflows avoid dangerous patterns.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#dangerous-workflow"}},{"name":"Code-Review","score":0,"reason":"Found 2/27 approved changesets -- 
score normalized to 0","details":null,"documentation":{"short":"Determines if the project requires human code review before pull requests (aka merge requests) are merged.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#code-review"}},{"name":"Packaging","score":-1,"reason":"packaging workflow not detected","details":["Warn: no GitHub/GitLab publishing workflow detected."],"documentation":{"short":"Determines if the project is published as a package that others can easily download, install, easily update, and uninstall.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#packaging"}},{"name":"Maintained","score":0,"reason":"0 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 0","details":null,"documentation":{"short":"Determines if the project is \"actively maintained\".","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#maintained"}},{"name":"Token-Permissions","score":-1,"reason":"No tokens found","details":null,"documentation":{"short":"Determines if the project's workflows follow the principle of least privilege.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#token-permissions"}},{"name":"Pinned-Dependencies","score":0,"reason":"dependency not pinned by hash detected -- score normalized to 0","details":["Warn: containerImage not pinned by hash: .devcontainer/Dockerfile:5","Info:   0 out of   1 containerImage dependencies pinned"],"documentation":{"short":"Determines if the project has declared and pinned the dependencies of its build process.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#pinned-dependencies"}},{"name":"CII-Best-Practices","score":0,"reason":"no effort to earn an OpenSSF best practices badge 
detected","details":null,"documentation":{"short":"Determines if the project has an OpenSSF (formerly CII) Best Practices Badge.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#cii-best-practices"}},{"name":"Security-Policy","score":0,"reason":"security policy file not detected","details":["Warn: no security policy file detected","Warn: no security file to analyze","Warn: no security file to analyze","Warn: no security file to analyze"],"documentation":{"short":"Determines if the project has published a security policy.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#security-policy"}},{"name":"Fuzzing","score":0,"reason":"project is not fuzzed","details":["Warn: no fuzzer integrations found"],"documentation":{"short":"Determines if the project uses fuzzing.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#fuzzing"}},{"name":"License","score":10,"reason":"license file detected","details":["Info: project has a license file: LICENSE:0","Info: FSF or OSI recognized license: MIT License: LICENSE:0"],"documentation":{"short":"Determines if the project has defined a license.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#license"}},{"name":"Signed-Releases","score":-1,"reason":"no releases found","details":null,"documentation":{"short":"Determines if the project cryptographically signs release artifacts.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#signed-releases"}},{"name":"Branch-Protection","score":0,"reason":"branch protection not enabled on development/release branches","details":["Warn: branch protection not enabled for branch 'master'"],"documentation":{"short":"Determines if the default and release branches are protected with GitHub's branch protection 
settings.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#branch-protection"}},{"name":"SAST","score":0,"reason":"SAST tool is not run on all commits -- score normalized to 0","details":["Warn: 0 commits out of 5 are checked with a SAST tool"],"documentation":{"short":"Determines if the project uses static code analysis.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#sast"}},{"name":"Vulnerabilities","score":8,"reason":"2 existing vulnerabilities detected","details":["Warn: Project is vulnerable to: GHSA-jr5f-v2jv-69x6","Warn: Project is vulnerable to: GHSA-fjxv-7rqg-78g4"],"documentation":{"short":"Determines if the project has open, known unfixed vulnerabilities.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#vulnerabilities"}}]},"last_synced_at":"2025-08-22T02:14:26.174Z","repository_id":44442462,"created_at":"2025-08-22T02:14:26.174Z","updated_at":"2025-08-22T02:14:26.174Z"},"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":28851240,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-01-28T15:15:36.453Z","status":"ssl_error","status_checked_at":"2026-01-28T15:15:13.020Z","response_time":57,"last_error":"SSL_read: unexpected eof while 
reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ad-generator","chatbot","code-generation","conversational-ai","embeddings","intent-classification","keywords-extraction","language-detection","machine-translation","ner","nlp","paraphrasing","question-answering","semantic-similarity","sentiment-analysis","text-classification","text-generation","text-summarization","tokenization"],"created_at":"2024-08-01T18:01:54.984Z","updated_at":"2026-01-28T20:44:34.676Z","avatar_url":"https://github.com/nlpcloud.png","language":"JavaScript","readme":"# Node.js Client For NLP Cloud\n\nThis is the Node.js client (with TypeScript types) for the [NLP Cloud](https://nlpcloud.com) API. See the [documentation](https://docs.nlpcloud.com) for more details.\n\nNLP Cloud serves high-performance pre-trained or custom models for NER, sentiment analysis, classification, summarization, dialogue summarization, paraphrasing, intent classification, product description and ad generation, chatbot, grammar and spelling correction, keywords and keyphrases extraction, text generation, image generation, question answering, automatic speech recognition, machine translation, language detection, semantic search, semantic similarity, tokenization, POS tagging, embeddings, and dependency parsing. It is ready for production, served through a REST API.\n\nYou can either use the NLP Cloud pre-trained models, fine-tune your own models, or deploy your own models.\n\nIf you face an issue, don't hesitate to raise it as a GitHub issue. 
Thanks!\n\n## Installation\n\nInstall via npm.\n\n```shell\nnpm install nlpcloud --save\n```\n\n## Returned Objects\n\nAll objects returned by the library are [Axios](https://github.com/axios/axios) promises.\n\nIn case of success, results are contained in `response.data`. In case of failure, you can retrieve the status code in `err.response.status` and the error message in `err.response.data.detail`.\n\n## Examples\n\nHere is a full example that summarizes a text using Facebook's Bart Large CNN model, with a fake token:\n\n```js\nconst NLPCloudClient = require('nlpcloud');\n\nconst client = new NLPCloudClient({model:'bart-large-cnn', token:'4eC39HqLyjWDarjtT1zdp7dc'})\n\nclient.summarization({text:`One month after the United States began what has become a \n  troubled rollout of a national COVID vaccination campaign, the effort is finally \n  gathering real steam. Close to a million doses -- over 951,000, to be more exact -- \n  made their way into the arms of Americans in the past 24 hours, the U.S. Centers \n  for Disease Control and Prevention reported Wednesday. That's the largest number \n  of shots given in one day since the rollout began and a big jump from the \n  previous day, when just under 340,000 doses were given, CBS News reported. \n  That number is likely to jump quickly after the federal government on Tuesday \n  gave states the OK to vaccinate anyone over 65 and said it would release all \n  the doses of vaccine it has available for distribution. Meanwhile, a number \n  of states have now opened mass vaccination sites in an effort to get larger \n  numbers of people inoculated, CBS News reported.`})\n  .then(function (response) {\n    console.log(response.data);\n  })\n  .catch(function (err) {\n    console.error(err.response.status);\n    console.error(err.response.data.detail);\n  });\n```\n\nHere is a full example that does the same thing, but on a GPU:\n\n```js\nconst NLPCloudClient = require('nlpcloud');\n\nconst client = new NLPCloudClient({model:'bart-large-cnn', token:'4eC39HqLyjWDarjtT1zdp7dc', gpu:true})\n\nclient.summarization({text:`One month after the United States began what has become a \n  troubled rollout of a national COVID vaccination campaign, the effort is finally \n  gathering real steam. Close to a million doses -- over 951,000, to be more exact -- \n  made their way into the arms of Americans in the past 24 hours, the U.S. Centers \n  for Disease Control and Prevention reported Wednesday. That's the largest number \n  of shots given in one day since the rollout began and a big jump from the \n  previous day, when just under 340,000 doses were given, CBS News reported. \n  That number is likely to jump quickly after the federal government on Tuesday \n  gave states the OK to vaccinate anyone over 65 and said it would release all \n  the doses of vaccine it has available for distribution. Meanwhile, a number \n  of states have now opened mass vaccination sites in an effort to get larger \n  numbers of people inoculated, CBS News reported.`})\n  .then(function (response) {\n    console.log(response.data);\n  })\n  .catch(function (err) {\n    console.error(err.response.status);\n    console.error(err.response.data.detail);\n  });\n```\n\nHere is a full example that does the same thing, but on a French text:\n\n```js\nconst NLPCloudClient = require('nlpcloud');\n\nconst client = new NLPCloudClient({model:'bart-large-cnn', token:'4eC39HqLyjWDarjtT1zdp7dc', gpu:true, lang:'fra_Latn'})\n\nclient.summarization({text:`Sur des images aériennes, prises la veille par un vol de surveillance \n  de la Nouvelle-Zélande, la côte d’une île est bordée d’arbres passés du vert \n  au gris sous l’effet des retombées volcaniques. On y voit aussi des immeubles\n  endommagés côtoyer des bâtiments intacts. « D’après le peu d’informations\n  dont nous disposons, l’échelle de la dévastation pourrait être immense, \n  spécialement pour les îles les plus isolées », avait déclaré plus tôt \n  Katie Greenwood, de la Fédération internationale des sociétés de la Croix-Rouge.\n  Selon l’Organisation mondiale de la santé (OMS), une centaine de maisons ont\n  été endommagées, dont cinquante ont été détruites sur l’île principale de\n  Tonga, Tongatapu. La police locale, citée par les autorités néo-zélandaises,\n  a également fait état de deux morts, dont une Britannique âgée de 50 ans,\n  Angela Glover, emportée par le tsunami après avoir essayé de sauver les chiens\n  de son refuge, selon sa famille.`})\n  .then(function (response) {\n    console.log(response.data);\n  })\n  .catch(function (err) {\n    console.error(err.response.status);\n    console.error(err.response.data.detail);\n  });\n```\n\nA JSON object is returned:\n\n```json\n{\n  \"summary_text\": \"Over 951,000 doses were given in the past 24 hours. That's the largest number of shots given in one day since the rollout began. 
That number is likely to jump quickly after the federal government gave states the OK to vaccinate anyone over 65. A number of states have now opened mass vaccination sites.\"\n}\n```\n\n## Usage\n\n### Client Initialization\n\nPass the model you want to use and the NLP Cloud token to the client during initialization.\n\nThe model can be a pretrained model like `en_core_web_lg` or `bart-large-mnli`, or one of your custom models, using `custom_model/\u003cmodel id\u003e` (e.g. `custom_model/2568`).\n\nYour token can be retrieved from your [NLP Cloud dashboard](https://nlpcloud.com/home/token).\n\n```js\nconst NLPCloudClient = require('nlpcloud');\n\nconst client = new NLPCloudClient({model:'\u003cmodel\u003e', token:'\u003cyour token\u003e'})\n```\n\nIf you want to use a GPU, pass `true` as the `gpu` argument.\n\n```js\nconst NLPCloudClient = require('nlpcloud');\n\nconst client = new NLPCloudClient({model:'\u003cmodel\u003e', token:'\u003cyour token\u003e', gpu:true})\n```\n\nIf you want to use the multilingual add-on in order to process non-English texts, set the `lang` argument to your language code. For example, if you want to process French text, you should set `lang:'fra_Latn'`.\n\n```js\nconst NLPCloudClient = require('nlpcloud');\n\nconst client = new NLPCloudClient({model:'\u003cmodel\u003e', token:'\u003cyour token\u003e', lang:'\u003cyour language code\u003e'})\n```\n\nIf you want to make asynchronous requests, pass `true` as the `async` argument.\n\n```js\nconst NLPCloudClient = require('nlpcloud');\n\nconst client = new NLPCloudClient({model:'\u003cmodel\u003e', token:'\u003cyour token\u003e', async:true})\n```\n\nIf you are making asynchronous requests, you will always receive a quick response containing a URL. You should then poll this URL with `asyncResult()` on a regular basis (every 10 seconds, for example) in order to check if the result is available. 
Here is an example:\n\n```js\nclient.asyncResult('https://api.nlpcloud.io/v1/get-async-result/21718218-42e8-4be9-a67f-b7e18e03b436')\n```\n\nThe above returns an object if the result is available, and an empty response (`null`) otherwise.\n\n### Automatic Speech Recognition (Speech to Text) Endpoint\n\nCall the `asr()` method and pass the following arguments:\n\n1. (Optional: either this or the encoded file should be set) `url`: a URL where your audio or video file is hosted\n1. (Optional: either this or the url should be set) `encodedFile`: a base-64-encoded version of your file\n1. (Optional) `inputLanguage`: the language of your file as ISO code\n\n```js\nclient.asr({url:'Your url'})\n```\n\n### Chatbot Endpoint\n\nCall the `chatbot()` method and pass the following arguments:\n\n1. Your input\n1. (Optional) `context`: A general context about the conversation\n1. (Optional) `history`: The history of your previous exchanges with the model\n\n```js\nclient.chatbot({text:'\u003cYour input\u003e'})\n```\n\n### Classification Endpoint\n\nCall the `classification()` method and pass the following arguments:\n\n1. The text you want to classify, as a string\n1. The candidate labels for your text, as an array of strings\n1. 
(Optional) `multiClass`: Whether the classification should be multi-class or not, as a boolean\n\n```js\nclient.classification({text:'\u003cYour block of text\u003e', labels:['label 1', 'label 2', ...]})\n```\n\n### Code Generation Endpoint\n\nCall the `codeGeneration()` method and pass the instruction for the code you want to generate.\n\n```js\nclient.codeGeneration({instruction:'\u003cYour instruction\u003e'})\n```\n\n### Dependencies Endpoint\n\nCall the `dependencies()` method and pass the text you want to perform part of speech tagging (POS) + arcs on.\n\n```js\nclient.dependencies({text:'\u003cYour block of text\u003e'})\n```\n\n### Embeddings Endpoint\n\nCall the `embeddings()` method and pass an array of blocks of text that you want to extract embeddings from.\n\n```js\nclient.embeddings({sentences:['\u003cText 1\u003e', '\u003cText 2\u003e', '\u003cText 3\u003e', ...]})\n```\n\nThe above command returns a JSON object.\n\n### Entities Endpoint\n\nCall the `entities()` method and pass the text you want to perform named entity recognition (NER) on.\n\n```js\nclient.entities({text:'\u003cYour block of text\u003e'})\n```\n\n### Generation Endpoint\n\nCall the `generation()` method and pass the following arguments:\n\n1. The block of text that starts the generated text. 256 tokens maximum for GPT-J on CPU, 1024 tokens maximum for GPT-J and GPT-NeoX 20B on GPU, and 2048 tokens maximum for Fast GPT-J and Finetuned GPT-NeoX 20B on GPU.\n1. (Optional) `maxLength`: The maximum number of tokens that the generated text should contain. 256 tokens maximum for GPT-J on CPU, 1024 tokens maximum for GPT-J and GPT-NeoX 20B on GPU, and 2048 tokens maximum for Fast GPT-J and Finetuned GPT-NeoX 20B on GPU. If `lengthNoInput` is false, the size of the generated text is the difference between `maxLength` and the length of your input text. If `lengthNoInput` is true, the size of the generated text is simply `maxLength`. Defaults to 50.\n1. 
(Optional) `lengthNoInput`: Whether `minLength` and `maxLength` should not include the length of the input text, as a boolean. If false, `minLength` and `maxLength` include the length of the input text. If true, `minLength` and `maxLength` don't include the length of the input text. Defaults to false.\n1. (Optional) `endSequence`: A specific token that should be the end of the generated sequence, as a string. For example, it could be `.`, `\\n`, `###`, or anything else under 10 characters.\n1. (Optional) `removeInput`: Whether you want to remove the input text from the result, as a boolean. Defaults to false.\n1. (Optional) `numBeams`: Number of beams for beam search. 1 means no beam search. This is an integer. Defaults to 1.\n1. (Optional) `numReturnSequences`: The number of independently computed returned sequences for each element in the batch, as an integer. Defaults to 1.\n1. (Optional) `topK`: The number of highest probability vocabulary tokens to keep for top-k filtering, as an integer. Maximum 1000 tokens. Defaults to 0.\n1. (Optional) `topP`: If set to a float \u003c 1, only the most probable tokens with probabilities that add up to `topP` or higher are kept for generation. This is a float. Should be between 0 and 1. Defaults to 0.7.\n1. (Optional) `temperature`: The value used to modulate the next token probabilities, as a float. Should be between 0 and 1. Defaults to 1.\n1. (Optional) `repetitionPenalty`: The parameter for repetition penalty, as a float. 1.0 means no penalty. Defaults to 1.0.\n1. (Optional) `badWords`: List of tokens that are not allowed to be generated, as a list of strings. Defaults to null.\n1. (Optional) `removeEndSequence`: Whether you want to remove the `endSequence` string from the result. 
Defaults to false.\n\n```js\nclient.generation({text:'\u003cYour input text\u003e'})\n```\n\n### Grammar and Spelling Correction Endpoint\n\nCall the `gsCorrection()` method and pass the text you want to correct.\n\n```js\nclient.gsCorrection({text:'\u003cThe text you want to correct\u003e'})\n```\n\n### Image Generation Endpoint\n\nCall the `imageGeneration()` method and pass the text you want to use to generate your image.\n\n```js\nclient.imageGeneration({text:'\u003cYour text instruction\u003e'})\n```\n\n### Intent Classification Endpoint\n\nCall the `intentClassification()` method and pass the text you want to analyze in order to detect the intent.\n\n```js\nclient.intentClassification({text:'\u003cThe text you want to analyze\u003e'})\n```\n\n### Keywords and Keyphrases Extraction Endpoint\n\nCall the `kwKpExtraction()` method and pass the text you want to extract keywords and keyphrases from.\n\n```js\nclient.kwKpExtraction({text:'\u003cThe text you want to analyze\u003e'})\n```\n\n### Language Detection Endpoint\n\nCall the `langdetection()` method and pass the text you want to analyze in order to detect the languages.\n\n```js\nclient.langdetection({text:'\u003cThe text you want to analyze\u003e'})\n```\n\n### Question Answering Endpoint\n\nCall the `question()` method and pass the following:\n\n1. Your question\n1. 
(Optional) A context that the model will use to try to answer your question\n\n```js\nclient.question({question:'\u003cYour question\u003e', context:'\u003cYour context\u003e'})\n```\n\n### Semantic Search Endpoint\n\nCall the `semanticSearch()` method and pass your search query.\n\n```js\nclient.semanticSearch('Your search query')\n```\n\nThe above command returns a JSON object.\n\n### Semantic Similarity Endpoint\n\nCall the `semanticSimilarity()` method and pass an array made up of 2 blocks of text that you want to compare.\n\n```js\nclient.semanticSimilarity({sentences:['\u003cBlock of text 1\u003e', '\u003cBlock of text 2\u003e']})\n```\n\nThe above command returns a JSON object.\n\n### Sentence Dependencies Endpoint\n\nCall the `sentenceDependencies()` method and pass a block of text made up of several sentences you want to perform POS + arcs on.\n\n```js\nclient.sentenceDependencies({text:'\u003cYour block of text\u003e'})\n```\n\n### Sentiment Analysis Endpoint\n\nCall the `sentiment()` method and pass the following:\n\n1. The text you want to get the sentiment of\n1. 
(Optional) The target element that the sentiment should apply to\n\n```js\nclient.sentiment({text:'\u003cYour block of text\u003e', target:'\u003cYour target\u003e'})\n```\n\n### Speech Synthesis Endpoint\n\nCall the `speechSynthesis()` method and pass the text you want to convert to audio:\n\n```js\nclient.speechSynthesis({text:\"\u003cYour block of text\u003e\"})\n```\n\nThe above command returns a JSON object.\n\n### Summarization Endpoint\n\nCall the `summarization()` method and pass the text you want to summarize.\n\n```js\nclient.summarization({text:'\u003cYour text to summarize\u003e'})\n```\n\n### Paraphrasing Endpoint\n\nCall the `paraphrasing()` method and pass the text you want to paraphrase.\n\n```js\nclient.paraphrasing({text:'\u003cYour text to paraphrase\u003e'})\n```\n\n### Tokenization Endpoint\n\nCall the `tokens()` method and pass the text you want to tokenize.\n\n```js\nclient.tokens({text:'\u003cYour block of text\u003e'})\n```\n\n### Translation Endpoint\n\nCall the `translation()` method and pass the text you want to translate.\n\n```js\nclient.translation({text:'\u003cYour text to translate\u003e'})\n```\n","funding_links":[],"categories":["chatbot"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fnlpcloud%2Fnlpcloud-js","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fnlpcloud%2Fnlpcloud-js","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fnlpcloud%2Fnlpcloud-js/lists"}