https://github.com/hariprasath-v/machinehack_intel_oneapi_hackathon_the_llm_challenge
Generate a response to a question from pre-defined text using an LLM (extractive question-answering (QA) model).
- Host: GitHub
- URL: https://github.com/hariprasath-v/machinehack_intel_oneapi_hackathon_the_llm_challenge
- Owner: hariprasath-v
- License: apache-2.0
- Created: 2023-10-31T14:27:14.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2023-11-07T15:43:53.000Z (over 1 year ago)
- Last Synced: 2025-01-13T01:44:55.938Z (4 months ago)
- Topics: accuracy, exploratory-data-analysis, extractive-question-answering, huggingface, machine-learning, matplotlib, nlp, nltk, numpy, pandas, python, seaborn, sklearn, spacy, spellchecker, wordcloud
- Language: HTML
- Homepage:
- Size: 2.72 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# Machinehack_Intel_oneapi_hackathon_the_llm_challenge
### Competition hosted on MachineHack
# About
### Generate a response to a question from pre-defined text using an LLM (extractive question-answering (QA) model).
### The final competition score is 0.25114
### The final leaderboard rank is 9/35
### The evaluation metric is accuracy.
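Accuracy here is the fraction of predicted answers that exactly match the ground truth. A quick illustration with sklearn's `accuracy_score` (the labels below are made up for the example):

```python
from sklearn.metrics import accuracy_score

# Hypothetical ground-truth answers and model predictions
y_true = ["Paris", "1912", "oxygen", "blue"]
y_pred = ["Paris", "1921", "oxygen", "blue"]

# 3 of 4 predictions match exactly -> 0.75
print(accuracy_score(y_true, y_pred))
```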
### File information
* mh-intel-oneapi-hackathon-the-llm-challenge-eda.ipynb [Kaggle](https://www.kaggle.com/code/hari141v/mh-intel-oneapi-hackathon-the-llm-challenge-eda)
#### Basic exploratory data analysis (a quick sketch follows the package list below)
#### Packages used:
* seaborn
* pandas
* numpy
* matplotlib
* nltk
* spacy
* wordcloud
* spellchecker
* sklearn
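A minimal sketch of this kind of text EDA, assuming the data ships as a CSV with a `question` column; the `train.csv` file name and the column name are placeholders, not taken from the repository:

```python
import pandas as pd
import matplotlib.pyplot as plt
from wordcloud import WordCloud

train = pd.read_csv("train.csv")  # hypothetical file name

# Basic shape and missing-value overview
print(train.shape)
print(train.isna().sum())

# Distribution of question lengths (in words)
train["question_len"] = train["question"].str.split().str.len()
train["question_len"].plot(kind="hist", bins=30, title="Question length (words)")
plt.show()

# Word cloud over all question text
wc = WordCloud(width=800, height=400, background_color="white")
wc.generate(" ".join(train["question"].astype(str)))
plt.imshow(wc, interpolation="bilinear")
plt.axis("off")
plt.show()
```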
* mh-intel-oneapi-hackathon-the-llm-challenge-model.ipynb [Kaggle](https://www.kaggle.com/code/hari141v/mh-intel-oneapi-hackathon-the-llm-challenge-model2)
#### I used a pre-trained model directly, without fine-tuning it on the training data, primarily due to my limited experience with NLP QA tasks. I loaded the model and ran predictions on the test data with the transformers inference pipeline; a sketch of this approach follows the package list below.
#### Packages used:
* pandas
* transformers (Hugging Face)
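A minimal sketch of the no-fine-tuning approach described above, using the Hugging Face `pipeline` API. The model checkpoint, the `test.csv` file name, and the `question`/`context` column names are assumptions for illustration; the repository's notebook may differ:

```python
import pandas as pd
from transformers import pipeline

# Extractive QA pipeline with an assumed pre-trained checkpoint
qa = pipeline(
    "question-answering",
    model="distilbert-base-cased-distilled-squad",  # placeholder checkpoint
)

# Hypothetical test file with "question" and "context" columns
test = pd.read_csv("test.csv")
test["answer"] = [
    qa(question=q, context=c)["answer"]
    for q, c in zip(test["question"], test["context"])
]
test[["answer"]].to_csv("submission.csv", index=False)
```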