Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/socathie/grok-vs-llama
Last synced: 2 months ago
JSON representation
- Host: GitHub
- URL: https://github.com/socathie/grok-vs-llama
- Owner: socathie
- Created: 2024-03-23T07:54:06.000Z (10 months ago)
- Default Branch: main
- Last Pushed: 2024-04-04T17:38:36.000Z (9 months ago)
- Last Synced: 2024-08-02T13:29:01.118Z (5 months ago)
- Language: Solidity
- Size: 2.39 MB
- Stars: 2
- Watchers: 1
- Forks: 0
- Open Issues: 1
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- awesome-ora - Grok vs Llama - 2024/projects/clu3sufsj0imuw201g0peq4s1/idea). (🤖 AI Oracle Ecosystem / General)
README
# Grok vs Llama
![](cover.png)
Grok, a 314B-parameter LLM by xAI, was recently open-sourced and is widely considered one of the strongest open-source LLMs.
However, without prompt engineering, the original Grok often shows too much of its own "personality" and answers the wrong question, or falls into a state we call a "Glitch": repeating the same phrase (e.g. "Grok Grok Grok Grok") over and over.
In this project, we implemented Grok vs Llama 2: Llama 2, another open-source LLM, monitors whether Grok is in a Glitch state, and if so, the user who initiated the AI request is rewarded with a bounty for finding Grok's "weakness".
All inference for the Grok and Llama 2 models, as well as the bounty payout, happens onchain, ensuring end-to-end verifiability.
## How It's Made
We used ORA's Onchain AI Oracle (https://docs.ora.io/doc/cle/ai-oracle) to run Grok and Llama 2 inference onchain.
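ORA's AI Oracle delivers model outputs back to a contract through a callback. The sketch below illustrates the flow described above: a user prompts Grok, the contract forwards Grok's answer to Llama 2 as a judge, and a bounty is paid if Llama 2 flags a Glitch. The `IAIOracle` interface is paraphrased from ORA's docs, and the model IDs, fee, and `GrokVsLlamaSketch` contract are illustrative assumptions, not the project's actual code:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

// Interface paraphrased from ORA's AI Oracle docs; the deployed
// interface may differ. Treat this whole file as a sketch.
interface IAIOracle {
    function requestCallback(
        uint256 modelId,
        bytes calldata input,
        address callbackContract,
        uint64 gasLimit,
        bytes calldata callbackData
    ) external payable returns (uint256 requestId);
}

contract GrokVsLlamaSketch {
    // Illustrative model IDs and fee: look up the real values in ORA's docs
    // (the fee should come from the oracle's fee estimation, not a constant).
    uint256 constant GROK_MODEL_ID = 9;
    uint256 constant LLAMA2_MODEL_ID = 11;
    uint64 constant CALLBACK_GAS = 5_000_000;
    uint256 constant ORACLE_FEE = 0.005 ether;
    uint256 constant BOUNTY = 0.01 ether;

    IAIOracle public immutable aiOracle;
    mapping(uint256 => address) public grokRequester;  // Grok requestId => user
    mapping(uint256 => address) public judgeRequester; // Llama 2 requestId => user

    constructor(IAIOracle _aiOracle) payable {
        aiOracle = _aiOracle;
    }

    // Step 1: a user sends a prompt to Grok; msg.value covers the oracle fee.
    function askGrok(string calldata prompt) external payable {
        uint256 id = aiOracle.requestCallback{value: msg.value}(
            GROK_MODEL_ID, bytes(prompt), address(this), CALLBACK_GAS, ""
        );
        grokRequester[id] = msg.sender;
    }

    // Steps 2-3: the oracle delivers each result here. A Grok answer is
    // forwarded to Llama 2 for judgment; a Llama 2 verdict pays the bounty
    // to the original requester if it starts with "GLITCH".
    function aiOracleCallback(
        uint256 requestId,
        bytes calldata output,
        bytes calldata
    ) external {
        require(msg.sender == address(aiOracle), "only oracle");
        if (grokRequester[requestId] != address(0)) {
            bytes memory judgePrompt = abi.encodePacked(
                "Reply GLITCH if the following text endlessly repeats ",
                "one phrase, otherwise reply OK: ",
                output
            );
            // Follow-up request paid from the contract's balance.
            uint256 judgeId = aiOracle.requestCallback{value: ORACLE_FEE}(
                LLAMA2_MODEL_ID, judgePrompt, address(this), CALLBACK_GAS, ""
            );
            judgeRequester[judgeId] = grokRequester[requestId];
        } else if (judgeRequester[requestId] != address(0)) {
            if (output.length >= 6 && keccak256(output[:6]) == keccak256("GLITCH")) {
                payable(judgeRequester[requestId]).transfer(BOUNTY);
            }
        }
    }

    // Accept ETH to fund follow-up oracle fees and bounties.
    receive() external payable {}
}
```

Because both inference steps run through the oracle's callback, every judgment and payout is triggered by verifiable onchain computation rather than an offchain server.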
## Usage
```shell
npx hardhat ignition deploy ignition/modules/GrokVsLlama.js --network sepolia --verify
npx hardhat run scripts/estimateGasLimit.js --network sepolia
npx hardhat run scripts/testOnSepolia.js --network sepolia
```
## Future
This project provides a foundation for AIs to monitor each other. In a future network of AI agents, mutual monitoring of this kind can reduce intentional or unintentional harmful behavior by AIs, and it points toward a fully autonomous AI network that operates without human intervention.