https://github.com/2003harsh/sentiment-analysis-using-bert
"Sentiment Analysis using BERT" utilizes advanced NLP with BERT for precise sentiment analysis, surpassing traditional methods like Word2Vec. Gain valuable insights into customer feedback and social sentiment effortlessly. ๐๐ฌ #NLP #SentimentAnalysis #BERT
beginner-friendly bert sentiment-analysis transformer
- Host: GitHub
- URL: https://github.com/2003harsh/sentiment-analysis-using-bert
- Owner: 2003HARSH
- License: mit
- Created: 2024-05-14T08:01:29.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2025-06-09T13:55:37.000Z (about 1 month ago)
- Last Synced: 2025-06-09T14:44:35.959Z (about 1 month ago)
- Topics: beginner-friendly, bert, sentiment-analysis, transformer
- Language: Jupyter Notebook
- Homepage:
- Size: 187 KB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Sentiment Analysis using BERT
## Overview
Sentiment Analysis using BERT is a Natural Language Processing (NLP) project focused on identifying sentiment in textual data. It highlights how **context-aware embeddings** generated by BERT can substantially improve sentiment classification accuracy, especially for negations and subtle expressions such as *"I didn't like the movie"* or *"The movie was not good."*
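To make "context-aware" concrete, here is a minimal sketch (not taken from the notebook, assuming the Hugging Face `transformers` and `torch` packages) that compares the contextual embedding of the word *good* in a plain and a negated sentence; a static embedding such as Word2Vec would return the identical vector in both cases.
```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

def embedding_of(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual vector BERT assigns to `word` inside `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]          # (seq_len, 768)
    token_ids = inputs["input_ids"][0].tolist()
    position = token_ids.index(tokenizer.convert_tokens_to_ids(word))
    return hidden[position]

good_plain = embedding_of("the movie was good", "good")
good_negated = embedding_of("the movie was not good", "good")

# A similarity well below 1.0 shows the surrounding words shifted the vector;
# a static Word2Vec lookup would give exactly 1.0 here.
print(torch.cosine_similarity(good_plain, good_negated, dim=0).item())
```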
### Sample:

## Features
* **Context-Aware Embeddings**: BERT is used to generate contextualized word embeddings that understand the position and meaning of words in full sentences.
* **Custom Classification Head**: The `[CLS]` token output from BERT is passed into a custom **feedforward neural network (768 × 3)** that classifies sentiment into three categories (e.g., positive, negative, neutral); see the sketch after this list.
* **Robust Against Negations**: Unlike traditional representations such as Bag of Words (BoW), TF-IDF, or Word2Vec, BERT embeddings preserve sentence structure and context, which lets the model correctly classify difficult inputs like:
  * `"I didn't like the movie"` → Negative
  * `"The movie was not good"` → Negative
* **Bidirectional Understanding**: BERT's bidirectional transformer architecture captures dependencies from both left and right contexts, providing deeper language understanding.
---
## Getting Started
1. **Clone the repository**:
```bash
git clone https://github.com/2003HARSH/Sentiment-Analysis-using-BERT.git
cd Sentiment-Analysis-using-BERT
```

2. **Install the required dependencies**:
```bash
pip install -r requirements.txt
```

3. **Refer to the notebook**:
   Open `sentiment-analysis-using-bert.ipynb` to see data preprocessing, model training, and evaluation (a rough training-step sketch follows below).
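
As a rough orientation before opening the notebook, a single supervised training step could look like the sketch below. This is not the repository's code: it swaps in the library's built-in `BertForSequenceClassification` head instead of the custom one described above, and the texts, labels, and learning rate are placeholders.

```python
import torch
from torch.optim import AdamW
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)
optimizer = AdamW(model.parameters(), lr=2e-5)

texts = ["The movie was not good", "Loved every minute of it"]
labels = torch.tensor([0, 2])        # illustrative ids, e.g. 0 = negative, 2 = positive

batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)

model.train()
optimizer.zero_grad()
outputs = model(**batch, labels=labels)   # returns loss and logits when labels are passed
outputs.loss.backward()
optimizer.step()
print(f"training loss: {outputs.loss.item():.4f}")
```
---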
## Contributing
Contributions are welcome! If you'd like to contribute, please fork the repository and submit a pull request with your enhancements.
---
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
---
Feel free to explore, experiment, and contribute to Sentiment Analysis using BERT. Let's unlock the power of language together!
**#SentimentAnalysis #BERT #NLP #MachineLearning #DeepLearning**