# LLM API Engine

<p align="center">
  <img src="https://img.youtube.com/vi/8kUeK1Bo4mM/maxresdefault.jpg" alt="LLM API Engine" width="600"/>
</p>

Build and deploy AI-powered APIs in seconds. This project lets you create custom APIs that extract structured data from websites using natural language descriptions, powered by LLMs and web scraping technology.

## Features

- 🤖 Natural Language API Creation - Describe your data needs in plain English
- 🔄 Automatic Schema Generation using OpenAI
- 🌐 Intelligent Web Scraping with Firecrawl
- ⚡ Real-time Data Updates with scheduled scraping
- 🚀 Instant API Deployment
- 📊 Structured Data Output with JSON Schema validation
- 💾 Redis-powered Caching and Storage

## Architecture

The LLM API Engine is designed with flexibility in mind:

1. **API Builder**: The Next.js application serves as the builder interface where you create and configure your endpoints.
2. **Consumable Endpoints**: Once created, your API endpoints can be deployed and consumed anywhere:
   - Cloudflare Workers (documentation coming soon)
   - Vercel Edge Functions
   - AWS Lambda
   - Any platform that can handle HTTP requests

This decoupled architecture means you can:
- Use the Next.js app solely for endpoint creation and management
- Deploy your consumable endpoints separately for optimal performance
- Scale your API consumption independently of the management interface

## Tech Stack

- **Frontend**: Next.js 14, React 18, TailwindCSS
- **APIs**: OpenAI, Firecrawl, Upstash Redis
- **Data Validation**: Zod
- **Animations**: Framer Motion
- **Deployment**: Vercel

## Getting Started

### Prerequisites

- Node.js 18+
- npm/yarn/pnpm
- Upstash Redis account
- OpenAI API key
- Firecrawl API key
- Serper API key

### Installation

1. Clone the repository:
```bash
git clone https://github.com/developersdigest/llm-api-engine.git
cd llm-api-engine
```

2. Install dependencies:
```bash
npm install
```

3. Create a `.env` file with the following variables:
```env
OPENAI_API_KEY=your_openai_key
NEXT_PUBLIC_FIRECRAWL_API_KEY=your_firecrawl_key
SERPER_API_KEY=your_serper_key
UPSTASH_REDIS_REST_URL=your_redis_url
UPSTASH_REDIS_REST_TOKEN=your_redis_token
NEXT_PUBLIC_API_ROUTE=http://localhost:3000  # Your API base URL
```

4. Run the development server:
```bash
npm run dev
```

Open [http://localhost:3000](http://localhost:3000) to see the application.
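A startup guard like the following can fail fast when any of the variables above are missing. This helper is illustrative only and not part of the repository; the key list simply mirrors the `.env` template.

```typescript
// env-check.ts - fail fast when required configuration is missing.
// Illustrative helper; the variable list mirrors the .env template above.
const REQUIRED_ENV_KEYS = [
  'OPENAI_API_KEY',
  'NEXT_PUBLIC_FIRECRAWL_API_KEY',
  'SERPER_API_KEY',
  'UPSTASH_REDIS_REST_URL',
  'UPSTASH_REDIS_REST_TOKEN',
] as const;

export function missingEnvKeys(
  env: Record<string, string | undefined>
): string[] {
  // Treat both unset and empty-string values as missing.
  return REQUIRED_ENV_KEYS.filter((key) => !env[key]);
}

export function assertEnv(env: Record<string, string | undefined>): void {
  const missing = missingEnvKeys(env);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(', ')}`);
  }
}
```

Calling `assertEnv(process.env)` once at startup turns a confusing runtime failure into an immediate, named error.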
## Deployment Options

The LLM API Engine is designed with a modular architecture that separates the API builder interface from the actual API endpoints. This means you can:

1. **Use the Builder Interface Only**
   - Deploy the Next.js app for API creation and management
   - Use it to generate and test your API configurations
   - Store configurations in Redis for later use

2. **Independent API Deployment**
   - Take the generated route configurations and deploy them anywhere
   - Implement the routes in your preferred framework:
     ```typescript
     // Example with Hono on Cloudflare Workers
     import { Hono } from 'hono'
     import { Redis } from '@upstash/redis/cloudflare'

     const app = new Hono()

     app.get('/api/results/:endpoint', async (c) => {
       // Build the Upstash client from the worker's environment bindings
       const redis = Redis.fromEnv(c.env)
       const data = await redis.get(`api/results/${c.req.param('endpoint')}`)
       return c.json(data)
     })

     export default app
     ```
   - Framework options:
     - Cloudflare Workers with Hono
     - Express.js standalone server
     - AWS Lambda with API Gateway
     - Any HTTP server framework

3. **Hybrid Approach**
   - Use the builder for configuration
   - Deploy endpoints separately for optimal performance
   - Keep configurations in sync via Redis

This flexibility allows you to:
- Scale API endpoints independently
- Choose the best deployment platform for your needs
- Optimize for cost and performance
- Maintain full control over your API infrastructure
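Whatever platform serves the endpoint, it reads the same Redis key the builder writes. A small shared helper (hypothetical, not part of the repo) keeps that key contract in one place and rejects malformed endpoint slugs before they reach Redis:

```typescript
// results-key.ts - single source of truth for the Redis key layout used
// by the results routes. Hypothetical helper shown for illustration.
const SLUG_PATTERN = /^[a-z0-9]+(?:-[a-z0-9]+)*$/;

export function resultsKey(endpoint: string): string {
  // Accept only lowercase, dash-separated slugs so a crafted path
  // segment cannot address unrelated Redis keys.
  if (!SLUG_PATTERN.test(endpoint)) {
    throw new Error(`Invalid endpoint slug: ${endpoint}`);
  }
  return `api/results/${endpoint}`;
}
```

Sharing one helper between the builder and every deployment target avoids the classic drift where one side writes `api/results/foo` and the other reads `results/foo`.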
## Usage

1. **Describe Your API**: Enter a natural language description of the data you want to extract
2. **Generate Schema**: The system will automatically generate a JSON schema
3. **Configure Sources**: Select websites to extract data from
4. **Deploy**: Get an instant API endpoint with your structured data

### Example

```bash
# Create an API to extract company information
curl -X POST "https://your-domain.com/api/deploy" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "Extract company name, revenue, and employee count",
    "urls": ["https://example.com/company"],
    "schedule": "0 5 * * *"
  }'
```

## API Documentation

### Endpoints

- `POST /api/generate-schema` - Generate JSON schema from description
- `POST /api/extract` - Extract data from URLs
- `POST /api/deploy` - Deploy a new API endpoint
- `GET /api/routes` - List all deployed routes
- `GET /api/results/:endpoint` - Get results for a specific endpoint
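Clients can sanity-check a deploy payload before posting it. The sketch below uses the field names from the `POST /api/deploy` example above; the specific validation rules are assumptions for illustration, not the engine's actual checks:

```typescript
// deploy-request.ts - validate the body for POST /api/deploy.
// Field names come from the README example; the rules here are
// illustrative assumptions, not the engine's server-side validation.
export interface DeployRequest {
  query: string;
  urls: string[];
  schedule: string; // standard five-field cron expression
}

export function validateDeployRequest(req: DeployRequest): string[] {
  const errors: string[] = [];
  if (req.query.trim().length === 0) {
    errors.push('query must not be empty');
  }
  if (req.urls.length === 0) {
    errors.push('urls must contain at least one entry');
  }
  for (const url of req.urls) {
    try {
      new URL(url); // throws on malformed URLs
    } catch {
      errors.push(`invalid url: ${url}`);
    }
  }
  // Five cron fields: minute hour day-of-month month day-of-week.
  if (req.schedule.trim().split(/\s+/).length !== 5) {
    errors.push('schedule must be a five-field cron expression');
  }
  return errors;
}
```

Returning a list of errors rather than throwing lets a UI surface every problem with the payload at once.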
### CRON Implementation (Coming Soon)

The LLM API Engine will support automated data updates through various CRON implementations:

1. **Vercel Cron Jobs (Free Tier)**
   - Leverage Vercel's built-in CRON functionality
   - Free tier includes 1 execution per day
   - Configure via `vercel.json`:
   ```json
   {
     "crons": [{
       "path": "/api/cron/update",
       "schedule": "0 0 * * *"
     }]
   }
   ```

2. **Upstash QStash (Alternative)**
   - Reliable scheduling service with more frequent updates
   - Better control over execution timing
   - Webhook-based triggering

3. **GitHub Actions Workflow**
   - Free alternative for open-source projects
   - Flexible scheduling options
   - Direct integration with your repository

Choose the implementation that best fits your needs based on:
- Required update frequency
- Budget constraints
- Infrastructure preferences

Stay tuned for detailed implementation guides for each option!

### API Usage Example

To fetch data from your deployed endpoint:

```bash
curl -X GET "${API_ROUTE}/api/results/nvidia-market-cap" \
  -H "Authorization: Bearer sk_your_api_key" \
  -H "Content-Type: application/json"
```

The API will return data in the following format:

```json
{
  "success": true,
  "data": {
    // Your extracted data here
  },
  "lastUpdated": "2024-01-01T00:00:00.000Z",
  "sources": [
    "https://example.com/source1",
    "https://example.com/source2"
  ]
}
```
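A TypeScript consumer can narrow a parsed response to this envelope with a small type guard. The shape is taken from the example above; the guard itself is a sketch, not code from the repository:

```typescript
// results-envelope.ts - runtime check for the response shape shown above.
// The fields mirror the README example; this guard is an illustrative
// sketch rather than code shipped with the project.
export interface ResultsEnvelope {
  success: boolean;
  data: unknown;
  lastUpdated: string; // ISO-8601 timestamp
  sources: string[];
}

export function isResultsEnvelope(value: unknown): value is ResultsEnvelope {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.success === 'boolean' &&
    'data' in v &&
    typeof v.lastUpdated === 'string' &&
    !Number.isNaN(Date.parse(v.lastUpdated)) &&
    Array.isArray(v.sources) &&
    v.sources.every((s) => typeof s === 'string')
  );
}
```

After `const body = await res.json()`, a call to `isResultsEnvelope(body)` gives the rest of the function typed access to `data`, `lastUpdated`, and `sources`.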
## Contributing

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

## License

This project is licensed under the MIT License - see the LICENSE file for details.

## Acknowledgments

- Built with [Next.js](https://nextjs.org/)
- Powered by [OpenAI](https://openai.com/)
- Web scraping by [Firecrawl](https://firecrawl.dev/)
- Data storage by [Upstash](https://upstash.com/)

## Roadmap

### 🚧 In Progress: CRON Functionality

Currently working on implementing scheduled data extraction with the following planned features:
- Backend CRON implementation using Vercel
- Rate limiting and retry mechanisms
- Job queue for concurrent scrapes
- Schedule management dashboard
- Job history and monitoring
- Email notifications for failed jobs
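Until the built-in CRON functionality ships, a Vercel cron route needs to reject requests that don't come from the scheduler. Vercel sends `Authorization: Bearer <CRON_SECRET>` on cron invocations when a `CRON_SECRET` environment variable is set; the guard below sketches that check (the handler and file path are assumptions, not repo code):

```typescript
// cron-auth.ts - verify that a request to /api/cron/update carries the
// secret Vercel attaches to cron invocations. Illustrative sketch only.
export function isAuthorizedCron(
  authorizationHeader: string | null,
  cronSecret: string | undefined
): boolean {
  // With no configured secret, refuse everything rather than run open.
  if (!cronSecret) return false;
  return authorizationHeader === `Bearer ${cronSecret}`;
}

// A Next.js route handler (e.g. app/api/cron/update/route.ts) might use it:
export async function GET(req: Request): Promise<Response> {
  const auth = req.headers.get('authorization');
  if (!isAuthorizedCron(auth, process.env.CRON_SECRET)) {
    return new Response('Unauthorized', { status: 401 });
  }
  // ...re-run the scheduled extractions here...
  return Response.json({ success: true });
}
```

Keeping the comparison in a pure function makes the authorization logic trivial to unit-test without standing up a server.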