https://github.com/murapadev/testingcustomtitansimplementations

# Testing State-of-the-Art Custom Models of the Google Titans Architecture

[![Python 3.8+](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/release/python-380/)
[![Hugging Face](https://img.shields.io/badge/🤗%20Transformers-Enabled-yellow.svg)](https://huggingface.co/transformers/)
[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT)
[![Experimental](https://img.shields.io/badge/Status-Experimental-orange.svg)]()

## Overview

This repository contains experimental code and evaluations for testing open-source models that claim to implement, or be inspired by, Google's Titans architecture. Titans is a model architecture proposed by Google Research that augments attention with a neural long-term memory module, targeting strong performance on long-context language modeling tasks.

## Purpose

The goal of this project is to:

1. Test the performance of different open-source models that claim to be based on or inspired by the Google Titans architecture.

2. Evaluate their capabilities in various tasks such as text generation, summarization, and question answering.

3. Collect metrics to compare and analyze model performance across different scenarios.

## Features

- **Comprehensive Testing Framework**: Evaluate models across different categories of questions
- **Performance Metrics**: Measure response time, token count, and other quantitative metrics
- **Comparison Tools**: Compare results between different models with visual representations
- **Configurable Testing**: Easily add new models and configure parameters
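
The performance metrics above (response time, token count) can be sketched with a minimal, model-agnostic timer. This is illustrative, not the repository's actual harness: `generate_fn` is a placeholder for any real model call, and whitespace splitting stands in for a proper tokenizer.

```python
import time

def measure_generation(generate_fn, prompt):
    """Time one model call and count tokens in its output.

    `generate_fn` is any callable mapping a prompt string to a
    response string; whitespace splitting is a crude stand-in
    for a real tokenizer.
    """
    start = time.perf_counter()
    response = generate_fn(prompt)
    elapsed = time.perf_counter() - start
    return {
        "response": response,
        "response_time_s": elapsed,
        "token_count": len(response.split()),
    }

# Stub "model" for illustration:
metrics = measure_generation(
    lambda p: "Paris is the capital of France.",
    "What is the capital of France?",
)
```

A real evaluation would swap the lambda for a Hugging Face pipeline call and use the model's own tokenizer for counting.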

## Getting Started

### Prerequisites

```bash
pip install -r requirements.txt
```

### Running Tests

To evaluate models:

```bash
python evaluate.py
```

Results will be stored in the `results/` directory with a timestamp.
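
Stored results can then be inspected programmatically. A small sketch (the exact file names and formats are determined by `evaluate.py`, so treat this as illustrative):

```python
from pathlib import Path

def latest_results(results_dir="results"):
    """Return result files in `results_dir`, newest first.

    Illustrative only: the actual naming scheme is set by
    evaluate.py's timestamping.
    """
    return sorted(
        Path(results_dir).glob("*"),
        key=lambda p: p.stat().st_mtime,
        reverse=True,
    )
```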

## Configuration

### Model Configuration

Models can be added and configured in the `model_config.json` file:

```json
{
  "models": [
    {
      "id": "model-id",
      "name": "Model Name",
      "path": "huggingface/model-path",
      "description": "Description of the model",
      "parameters": {
        "max_length": 150,
        "temperature": 0.7,
        "top_p": 0.9
      }
    }
  ]
}
```
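
A config shaped like the example above can be parsed with the standard library. The entry below is illustrative (it mirrors the model listed later in this README), not the repository's actual file:

```python
import json

# Illustrative config following the schema shown above.
config_text = """
{
  "models": [
    {
      "id": "titan-transformer",
      "name": "Titan Transformer",
      "path": "rajveer43/titan-transformer",
      "description": "Base implementation of the Titan architecture",
      "parameters": {"max_length": 150, "temperature": 0.7, "top_p": 0.9}
    }
  ]
}
"""

config = json.loads(config_text)
for model in config["models"]:
    params = model["parameters"]
    # Generation parameters would be passed to the model call.
    print(f'{model["id"]}: max_length={params["max_length"]}, '
          f'temperature={params["temperature"]}')
```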

### Test Questions

Questions are organized by category in `questions.json`. You can modify or extend these questions to test different aspects of the models.
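
One plausible layout for `questions.json`, keyed by the test categories this repository uses; the actual schema may differ, so this is a sketch only:

```json
{
  "facts": [
    "What is the capital of France?"
  ],
  "thinking": [
    "If a train leaves at 3 pm travelling 60 km/h, how far has it gone by 5 pm?"
  ],
  "code": [
    "Write a Python function that reverses a string."
  ],
  "art": [
    "Write a haiku about autumn."
  ]
}
```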

## Models Tested

- [Titan Transformer](https://huggingface.co/rajveer43/titan-transformer) - Base implementation of the Titan architecture

## Test Categories

- **Facts**: Testing knowledge retrieval and factual accuracy
- **Thinking**: Testing reasoning and problem-solving capabilities
- **Code**: Testing code generation abilities
- **Art**: Testing creative writing capabilities

## Disclaimer

This is an independent research project and is not affiliated with or endorsed by Google. The models tested here are third-party implementations that claim compatibility or similarity with the Google Titans architecture.