Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/saransh-cpp/memetastic-backend
Created for the Elasticsearch X Code for Cause Hackathon. This is the backend for my MemeTastic Application. (Winner of the hackathon)
- Host: GitHub
- URL: https://github.com/saransh-cpp/memetastic-backend
- Owner: Saransh-cpp
- Created: 2021-05-23T10:27:27.000Z (over 3 years ago)
- Default Branch: main
- Last Pushed: 2021-06-11T10:47:58.000Z (over 3 years ago)
- Last Synced: 2024-12-30T22:15:02.660Z (13 days ago)
- Topics: axios, backend, backend-api, cicd, elasticsearch, express, github-actions, hackathon, kibana, ngrams, nodejs
- Language: JavaScript
- Homepage:
- Size: 26.4 KB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
README
# MemeTastic-backend
Created for the Elasticsearch X Code for Cause Hackathon. (Winner of the hackathon - [devpost project](https://devpost.com/software/memetastic))
This is the backend for my MemeTastic Application.
## How it works
- Scrapes the top 100 memes from the subreddit `memes` using Reddit's public API.
- Adds the `link` and the `title` of those 100 memes to `elasticsearch`, which can then be visualised using `kibana`.
- `Elasticsearch` is hosted on `Google Cloud`.
- The first two steps are repeated every 30 minutes by a CI/CD pipeline built with `GitHub Actions`, so the application always stays up to date.
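
The scrape step above can be sketched in Node.js. This is a hypothetical sketch, not the repository's actual code: the helper names, the `User-Agent` string, and the exact Reddit response fields are assumptions, and Node's built-in `fetch` stands in for the `axios` call the project uses.

```javascript
// Hypothetical sketch of the scrape step (the project itself uses axios;
// Node 18+'s built-in fetch keeps this dependency-free).
const REDDIT_TOP = 'https://www.reddit.com/r/memes/top.json?limit=100';

// Reddit's listing endpoint wraps posts as
// { data: { children: [ { data: { title, url, ... } }, ... ] } }.
// Keep only the two fields that get indexed: the title and the link.
function toDocs(listing) {
  return listing.data.children.map(({ data }) => ({
    title: data.title,
    link: data.url,
  }));
}

// Fetch the top 100 posts and reduce them to indexable documents.
async function fetchTopMemes() {
  const res = await fetch(REDDIT_TOP, {
    headers: { 'User-Agent': 'memetastic-sketch/0.1' },
  });
  return toDocs(await res.json());
}
```

Each resulting `{ title, link }` document would then be written to the `dankmemes` index whose mapping is shown in the Ngrams section.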
## Ngrams
- This also uses edge n-grams for search-as-you-type matching; the implementation is given below (applied using `kibana`).
```JSON
PUT dankmemes
{
  "settings": {
    "analysis": {
      "analyzer": {
        "autocomplete": {
          "tokenizer": "autocomplete",
          "filter": ["lowercase"]
        },
        "autocomplete_search": {
          "tokenizer": "lowercase"
        }
      },
      "tokenizer": {
        "autocomplete": {
          "type": "edge_ngram",
          "min_gram": 1,
          "max_gram": 10,
          "token_chars": ["letter"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "analyzer": "autocomplete",
        "search_analyzer": "autocomplete_search"
      }
    }
  }
}
```
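
With this mapping, the `autocomplete` analyzer emits edge n-grams of length 1 to 10 at index time, while `autocomplete_search` only lowercases and splits the user's input at search time, so a plain `match` query behaves like prefix matching. A hypothetical console query (the search text is made up):

```JSON
GET dankmemes/_search
{
  "query": {
    "match": {
      "title": {
        "query": "cat mem",
        "operator": "and"
      }
    }
  }
}
```

Because the indexed n-grams already cover every prefix, no `wildcard` or `match_phrase_prefix` query is needed here.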