{"id":23521198,"url":"https://github.com/nascarjake/luminary","last_synced_at":"2025-04-19T19:41:16.553Z","repository":{"id":268038090,"uuid":"902697897","full_name":"nascarjake/luminary","owner":"nascarjake","description":"AI Pipeline builder with support for OpenAI","archived":false,"fork":false,"pushed_at":"2024-12-22T02:44:36.000Z","size":32124,"stargazers_count":3,"open_issues_count":0,"forks_count":0,"subscribers_count":1,"default_branch":"main","last_synced_at":"2024-12-22T03:20:20.902Z","etag":null,"topics":["ai","aipipeline","aitool","angular","electron","nodejs","openai"],"latest_commit_sha":null,"homepage":"https://jakedoesdev.com","language":"JavaScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/nascarjake.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-12-13T04:56:48.000Z","updated_at":"2024-12-22T02:44:39.000Z","dependencies_parsed_at":null,"dependency_job_id":"0e3e46d8-322e-41ea-945c-ca52c46ce96b","html_url":"https://github.com/nascarjake/luminary","commit_stats":null,"previous_names":["nascarjake/luminary"],"tags_count":8,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/nascarjake%2Fluminary","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/nascarjake%2Fluminary/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/nascarjake%2Fluminary/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/nascarjake%2Fluminary/manifests","owner_url":"https://repos.ecosyst
e.ms/api/v1/hosts/GitHub/owners/nascarjake","download_url":"https://codeload.github.com/nascarjake/luminary/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":231260790,"owners_count":18349461,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","aipipeline","aitool","angular","electron","nodejs","openai"],"created_at":"2024-12-25T17:11:02.930Z","updated_at":"2025-04-19T19:41:16.529Z","avatar_url":"https://github.com/nascarjake.png","language":"JavaScript","readme":"# ![Luminary Logo](https://raw.githubusercontent.com/nascarjake/luminary/main/logo.png)\n\n\u003ch1 align=\"center\"\u003eAI Pipeline Tool\u003c/h1\u003e\n\u003cp align=\"center\"\u003eA visual development platform for creating complex AI workflows using OpenAI assistants.\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003ca href=\"https://github.com/nascarjake/luminary/releases/latest\"\u003e\n    \u003cimg src=\"https://img.shields.io/github/v/release/nascarjake/luminary?label=Latest%20Release\u0026style=for-the-badge\" alt=\"Latest Release\" /\u003e\n  \u003c/a\u003e\n  \u003ca href=\"https://github.com/nascarjake/luminary/releases/latest\"\u003e\n    \u003cimg src=\"https://img.shields.io/github/downloads/nascarjake/luminary/total?label=Total%20Downloads\u0026style=for-the-badge\" alt=\"Downloads\" /\u003e\n  \u003c/a\u003e\n  \u003ca href=\"LICENSE.md\"\u003e\n    \u003cimg src=\"https://img.shields.io/github/license/nascarjake/luminary?style=for-the-badge\" alt=\"License\" /\u003e\n  \u003c/a\u003e\n\u003c/p\u003e\n\n\u003cp 
align=\"center\"\u003e\n  \u003ca href=\"https://github.com/nascarjake/luminary/releases/download/v0.1.1/luminary_0.1.1_amd64.deb\"\u003e\u003cimg src=\"https://img.shields.io/badge/Linux-FCC624?style=for-the-badge\u0026logo=linux\u0026logoColor=black\" alt=\"Linux Release\"\u003e\u003c/a\u003e\n  \u003ca href=\"https://github.com/nascarjake/luminary/releases/download/v0.1.1/Luminary-Mac-Intel-0.1.1.zip\"\u003e\u003cimg src=\"https://img.shields.io/badge/mac%20os%20intel-000000?style=for-the-badge\u0026logo=apple\u0026logoColor=white\" alt=\"MacOS Intel Release\"\u003e\u003c/a\u003e\n  \u003ca href=\"https://github.com/nascarjake/luminary/releases/download/v0.1.1/Luminary-Mac-Apple-Silicon-0.1.1.zip\"\u003e\u003cimg src=\"https://img.shields.io/badge/mac%20os%20arm%20(m*)-000000?style=for-the-badge\u0026logo=apple\u0026logoColor=white\" alt=\"MacOS ARM Release\"\u003e\u003c/a\u003e\n  \u003ca href=\"https://github.com/nascarjake/luminary/releases/download/v0.1.1/Luminary.Setup.0.1.1.exe\"\u003e\u003cimg src=\"https://img.shields.io/badge/Windows%2064%20Bit-0078D6?style=for-the-badge\u0026logo=windows\u0026logoColor=white\" alt=\"Windows 64 Bit Release\"\u003e\u003c/a\u003e\u003cbr\u003e\n  \u003cstrong\u003e\u003ca href=\"https://www.youtube.com/watch?v=LyOMJq47ASQ\"\u003eDemo\u003c/a\u003e \u0026nbsp;·\u0026nbsp; \u003ca href=\"https://www.canva.com/design/DAGZZUBLnFc/-VlBs8UTq1k_2Hc5fHsC6Q/edit?utm_content=DAGZZUBLnFc\u0026utm_campaign=designshare\u0026utm_medium=link2\u0026utm_source=sharebutton\"\u003ePresentation\u003c/a\u003e \u0026nbsp;·\u0026nbsp; \u003ca href=\"https://jakedoesdev.com\"\u003eDiscord\u003c/a\u003e\u003c/strong\u003e\n\u003c/p\u003e\n\n---\n\n## Table of Contents\n- [Introduction](#introduction)\n- [Key Features](#key-features)\n- [Screenshots](#screenshots)\n- [Installation](#installation)\n- [Getting Started](#getting-started)\n- [Building](#building)\n- [Environment Setup (Optional)](#environment-setup-optional)\n- [OpenAI 
Integration](#openai-integration)\n- [Building a Pipeline](#building-a-pipeline)\n- [Profiles](#profiles)\n- [Projects](#projects)\n- [Scheduling](#scheduling)\n- [Functions](#functions)\n- [Roadmap](#roadmap)\n- [Community](#community)\n- [Credits](#credits)\n- [License](#license)\n\n---\n\n## Introduction\n\nLuminary provides tools for building workflows where multiple AI assistants can collaborate. It also supports custom code integration for more advanced data processing. By splitting complex tasks into smaller roles, you can reduce errors and hallucinations, and keep your projects organized and maintainable.\n\n\u003e **Note**  \n\u003e **Download for free** from the [Releases](https://github.com/nascarjake/luminary/releases/latest), check out the [Demo Video](https://www.youtube.com/watch?v=g46q1IjClz8), and explore the [Overview Presentation](https://www.canva.com/design/DAGZZUBLnFc/-VlBs8UTq1k_2Hc5fHsC6Q/edit?utm_content=DAGZZUBLnFc\u0026utm_campaign=designshare\u0026utm_medium=link2\u0026utm_source=sharebutton). Join our community on [Discord](https://jakedoesdev.com)!\n\n---\n\n## Luminary Builds Businesses\n\n**Want to start a merch shop?**\n- Create a pipeline that generates images and sends them to Teespring\n- Create merch items on Teespring, like shirts and mugs, using the images (via API)\n- Select and buy a domain and connect it to your Teespring site (via the Cloudflare API, for example)\n- Create SEO metadata for your site\n- Create social media posts of your products and schedule them to post.\n\n**Want to start a survey website/app?**\n- Create an AI that asks the user what type of survey they want to build\n- Collect information about what kind of recommendations or calls to action need to be made\n- Generate a data object that can be plugged into an app you are building, providing a great survey experience.\n\nIf you are building an app or tool that lets people make their own guide, website, flyer, schedule, trading bot, anything.... 
then you can use AI to make a pipeline that will make using your tool as easy as talking.\n\nYou design that AI pipeline in Luminary, then export it to the standalone Luminary pipeline engine library, which can be imported into your project. This means you use the Luminary tool to build and test the pipeline, then use it to run your own app or business.\n\n---\n\n## Key Features\n- **Multiple AI Assistants** — Build complex AI workflows by orchestrating specialized assistants.\n- **Visual Graph Editor** — Drag-and-drop nodes to design AI pipelines without tangling in code.\n- **Custom Code Integration** — Extend functionality with Node.js, Python, or any language via command line.\n- **Profile \u0026 Project Management** — Keep everything from pipeline configurations to custom functions neatly organized.\n- **Scheduling \u0026 Automation** — Run pipelines at set times, perfect for routine tasks or batch processes.\n- **Desktop App \u0026 NPM Library** — Use the visual environment or integrate workflows directly into your own Node.js projects.\n\n---\n\n## Screenshots\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"https://raw.githubusercontent.com/nascarjake/luminary/main/screen_graph.png\" alt=\"Graph Editor\" width=\"49%\" /\u003e\n  \u003cimg src=\"https://raw.githubusercontent.com/nascarjake/luminary/main/screen_chat.png\" alt=\"Chat Interface\" width=\"49%\" /\u003e\u003cbr /\u003e\n  \u003cimg src=\"https://raw.githubusercontent.com/nascarjake/luminary/main/screen_schedule.png\" alt=\"Scheduling UI\" width=\"49%\" /\u003e\n\u003c/p\u003e\n\n---\n\n## Installation\n\nTo get started with Luminary, follow these steps:\n\n### Desktop Application\n1. **Download the latest version** for your platform from the [GitHub releases page](https://github.com/nascarjake/luminary/releases/latest).  \n   - **Windows**: `.exe` installer  \n   - **macOS**: `.zip` file  \n   - **Linux**: `.deb` package\n\n2. 
**Install** for your platform:\n   - **Windows**: Run the `.exe` and follow the setup.  \n   - **macOS**:  \n     - Extract the zip  \n     - Double-click `install.command`  \n     - Drag Luminary to `Applications`  \n     - *App is not signed/notarized, so Gatekeeper may prompt you*  \n     - *If you get an error about the app being damaged, run:*\n       ```bash\n       xattr -c /Applications/Luminary.app\n       ```\n   - **Linux**:  \n     - ```bash\n       sudo dpkg -i luminary_x.x.x_\u003carch\u003e.deb\n       ```\n       Replace `\u003carch\u003e` with either `arm64` or `amd64` based on your download\n     - If you encounter any dependency issues, run:\n       ```bash\n       sudo apt-get install -f\n       ```\n     - Launch Luminary from your applications menu or by running `luminary` in the terminal\n\n\u003e **Note**  \n\u003e No need for Node.js or Python unless you plan on running custom functions that require them.\n\n---\n\n## Getting Started\n\nWatch our [Demo Video](https://www.youtube.com/watch?v=g46q1IjClz8) to see Luminary in action! Then follow these steps:\n\n1. Install Luminary using the instructions above\n2. Download an example profile from our [profile-examples](profile-examples) directory\n3. Launch Luminary and go to \"Manage Profiles\"\n4. Import the downloaded profile\n5. Explore the pre-configured assistants and functions that come with the profile\n\nEach example profile is designed to showcase different capabilities of Luminary. Check the [profile-examples/README.md](profile-examples/README.md) for detailed information about each example.\n\n---\n\n## Building\n\n\u003e **Prerequisite**  \n\u003e Ensure you have **Node.js 16+** installed.\n\n1. **Clone** this repo.\n2. Run `npm install`.\n\n### Development Mode\n```bash\nnpm run electron:dev\n```\n\u003e This concurrently starts the Angular dev server and launches Electron.  
\n\u003e **Tip:** After it loads, press `Ctrl+R` to refresh once Angular finishes building.\n\n### Production Builds\n\n- **Windows**:\n  ```bash\n  npm run electron:build:win\n  ```\n  *Creates an installer in the `dist` folder.*\n\n- **macOS**:\n  ```bash\n  npm run electron:build:mac\n  ```\n  *Generates a `.dmg` in the `dist` folder.*  \n  *Requires Xcode signing for a fully notarized build.*\n\n---\n\n## Environment Setup (Optional)\n\n\u003e **Skip this unless you are changing the way the application builds** — the default scripts auto-generate these environment files.\n\n1. Run the setup script:\n   - macOS/Linux: `./scripts/setup-env.sh`\n   - Windows: `scripts\\setup-env.bat`\n2. Edit the generated files in `src/environments/` to add any environment variables you need.  \n\nThese files are `.gitignore`d to keep your data safe.\nThese scripts are run automatically when you run `npm run electron:dev`, via `./scripts/setup-env.js`.\n\n---\n\n## OpenAI Integration\n\nLuminary is built around the OpenAI Assistants API, providing a structured interface for creating and managing AI workflows. The platform currently focuses exclusively on OpenAI's technology to ensure consistency across applications.\n\n### OpenAI Requirements\n\n1. **API Key**: To use OpenAI assistants in Luminary, you need an OpenAI API key. You can obtain one from the OpenAI website. Once you have the key, you can set it in Luminary's settings.\n\n2. **API Usage Tier**: OpenAI has different pricing plans for API usage. Luminary supports the API at any tier, but some models and features may be locked, and you may run into rate limits, depending on your tier. To reach a higher usage tier, you need to add more funds to your OpenAI API account.\n\n(The video pipeline shown in the example video was made with a single $50 deposit into a blank OpenAI API account. 
This allowed for 450k TPM and the use of a 16k-token GPT-4o model.)\n\n### Future AI Integration Plans\nWe are exploring integration with additional AI technologies to expand Luminary's capabilities:\n- Support for alternative language models\n- LangChain integration for enhanced flexibility\n- Extended model capabilities and custom model support\n\n---\n\n## Building a Pipeline\n\nCreating an AI pipeline in Luminary involves a step-by-step process using three main components: Object Schemas, AI Assistants, and the Graph Pipeline. Here's a guide to building a pipeline, using a video generation workflow as an example:\n\n### 1. Define Object Schemas\n\nStart by creating object schemas for all data types you'll pass between assistants:\n\n1. Open the Schema Editor UI or prepare JSON schema definitions.\n2. Create schemas for each data type. For our video generation example:\n   - Outline schema\n   - Script schema\n   - Video schema\n3. Define properties and validation rules for each schema.\n4. Save your schemas for use in assistant configurations.\n\n### 2. Configure AI Assistants\n\nNext, set up the assistants that will process your data:\n\n1. Create a new assistant in the Assistant Configuration interface.\n2. Write system instructions to define the assistant's role and behavior.\n3. Attach necessary tool functions, which may include your custom scripts.\n4. Specify input and output object schemas:\n   - For an outline generator: No input, Outline schema as output\n   - For a script writer: Outline schema as input, Script schema as output\n   - For a video creator: Script schema as input, Video schema as output\n5. If you don't select an output function, Luminary will create one automatically.\n6. Repeat this process for each assistant in your pipeline.\n\n### 3. Build the Graph Pipeline\n\nFinally, create the visual workflow using the graph editor:\n\n1. Open the Graph Editor canvas.\n2. Drag assistant nodes from the left library panel onto the canvas.\n3. 
Arrange your nodes in the desired workflow order.\n4. Connect nodes by dragging from an output dot to an input dot:\n   - Outline Generator output to Script Writer input\n   - Script Writer output to Video Creator input\n5. Ensure connections are between compatible object schema types.\n6. Luminary will validate connections to prevent type mismatches.\n7. Add any branching or conditional flows as needed.\n8. Test your pipeline using the debug tools provided.\n\nBy following these steps, you'll create a fully functional AI pipeline in Luminary, with data flowing seamlessly between your custom assistants.\n\n---\n\n## Profiles\n\nProfiles in Luminary organize and contain AI workflows. Each profile includes the components needed to run AI pipelines.\n\n### Profile Components\nA profile contains:\n- Pipeline configurations\n- Assistant definitions\n- Object schemas\n- Custom functions\n- Graph layouts\n\n### Storage and Management\nProfiles and related files are stored in a `.luminary` file in your User directory:\n- Windows: `%userprofile%\\.luminary`\n- macOS: `~/.luminary`\n\nLuminary is installed in the following directories:\n- Windows: `%AppData%/Luminary`\n- macOS: `~/Library/Application Support/Luminary`\n\n### Profile Operations\nThe settings menu allows you to:\n- Create new profiles\n- Import profiles from zip files\n- Export profiles\n- Switch between profiles\n\n### Compatibility\nProfiles work with:\n- Desktop application via zip files\n- Pipeline engine library\n- Version control systems\n\n### Example Profiles\nWe provide several example profiles in our [profile-examples](profile-examples) directory to help you get started:\n\n- **Video Pipeline Profile**: Demonstrates automated video content creation using GPT and Pictory.ai\n- More examples coming soon!\n\n---\n\n## Projects\n\nProjects help organize different aspects of your AI workflows within profiles.\n\n### Project Management\n- Sort generated content\n- Handle scheduled events\n- Track 
executions\n- Separate development and production work\n\n### Access and Control\nThrough the settings menu, you can:\n- Create and remove projects\n- Monitor resources\n- Set access controls\n- View activity logs\n\n---\n\n## Scheduling\n\nLuminary includes a scheduling system, complete with a calendar view for easy scheduling.\nYou can schedule tasks to run at specific times; a scheduled event can send a new message or pass along an object generated in the past.\n\n**Scheduling allows you to:**\n- Schedule tasks to run at specific times\n- Set up recurring tasks, weekly or monthly\n- Monitor task executions\n\n**Example usage:**\n- Generate a video; the resulting object is saved\n- Schedule that object to be passed to the Upload To YouTube function/assistant in one week.\n\n---\n\n## Functions\n\nFunctions allow custom code execution in your AI pipelines, enabling integration with external systems and data processing. You can bring your own code for each function you create. Any language is supported as long as the script can be executed by a terminal command.\n\n**What is an AI Tool function?**\nAI Tool Functions = Actions performed by the AI\n\nTool functions allow assistants to execute code. These functions can be used to perform tasks like data processing, data analysis, or data transformation. The code that is executed is code you write or import. *The code does not run on OpenAI's servers*. Instead, it runs within Luminary on your **local machine**.\n\nIn order for the AI to call your function, you must set up a **tool definition**. Luminary helps you do this in the assistant editor. A **tool definition** is simply a JSON object that defines the input and output schemas for your function. This is what tells the AI how to call your function, what parameters are available, and what the function is used for.\n\n### Integration Methods\nFunctions can be used as:\n1. **Assistant Tools**: Code that assistants can execute\n2. 
**Standalone Nodes**: Independent processing steps in your pipeline\n\n### Output Functions\nAssistant tool functions have the option to be marked `isOutput`. When this is set to true, the function will be used as the output function for the assistant. If no output function is set, Luminary will create one automatically, based on the outputs defined in the assistant's tool definition.\n\n### Media Management\nObject Schema fields can be marked as `isMedia`. When this is set to true, the field will be used to store media files. `isMedia` fields are automatically downloaded when output by AI assistants or assistant tool functions.\n\n### Function Development\nTo create a custom function in Luminary:\n\n1. Write your function code:\n   Choose any language that can be executed via command line. Here are examples in JavaScript and Python:\n\n   **JavaScript Example (function.js):**\n   ```javascript\n   #!/usr/bin/env node\n\n   function finalOutput(output) {\n     console.log('$%*%$Output:' + JSON.stringify(output));\n   }\n\n   async function main() {\n     try {\n       const inputs = JSON.parse(await new Promise(resolve =\u003e process.stdin.once('data', resolve)));\n       \n       const { title, content } = inputs;\n       \n       // Your function logic here\n       const processedContent = content.toUpperCase();\n       \n       const result = {\n         title: title,\n         processedContent: processedContent\n       };\n       \n       finalOutput(result);\n     } catch (error) {\n       console.error(JSON.stringify({ error: error.message }));\n       process.exit(1);\n     }\n   }\n\n   main();\n   ```\n\n   **Python Example (function.py):**\n   ```python\n   #!/usr/bin/env python3\n   import sys\n   import json\n\n   def final_output(output):\n       print('$%*%$Output:' + json.dumps(output))\n\n   try:\n       inputs = json.loads(sys.stdin.read())\n       \n       title = inputs['title']\n       content = inputs['content']\n       \n       # Your function 
logic here\n       processed_content = content.upper()\n       \n       result = {\n           'title': title,\n           'processedContent': processed_content\n       }\n       \n       final_output(result)\n   except Exception as e:\n       print(json.dumps({'error': str(e)}), file=sys.stderr)\n       sys.exit(1)\n   ```\n\n2. Set up the function in Luminary:\n   - Script File: Point to your script file (e.g., `function.js` or `function.py`).\n   - Execution Command: \n     - For JavaScript: `node function.js`\n     - For Python: `python function.py`\n   - Define Input/Output Schemas:\n     Input Schema:\n     ```json\n     {\n       \"type\": \"object\",\n       \"properties\": {\n         \"title\": { \"type\": \"string\" },\n         \"content\": { \"type\": \"string\" }\n       },\n       \"required\": [\"title\", \"content\"]\n     }\n     ```\n     Output Schema:\n     ```json\n     {\n       \"type\": \"object\",\n       \"properties\": {\n         \"title\": { \"type\": \"string\" },\n         \"processedContent\": { \"type\": \"string\" }\n       },\n       \"required\": [\"title\", \"processedContent\"]\n     }\n     ```\n\n3. Important Factors:\n   - Use `$%*%$Output:` prefix for final output in both languages.\n   - Handle errors and output them as JSON to stderr.\n   - Parse input from stdin as JSON.\n   - Ensure your script has proper execute permissions (chmod +x for Unix-like systems).\n\n4. 
Testing:\n   Use Luminary's built-in tools to test your function:\n   - Provide sample inputs matching your input schema.\n   - Verify the output matches your output schema.\n   - Test error scenarios to ensure proper error handling.\n\nBy following these steps, you can create custom functions that seamlessly integrate with Luminary's AI pipelines, allowing for powerful data processing and external system interactions.\n\n### Runtime Communication\nFunctions use standard streams:\n- **Input**: JSON via stdin\n- **Status**: Updates via stdout\n- **Results**: Use '$%*%$Output:' prefix with JSON\n- **Errors**: Standard stderr handling\n\n### Output Formats\nFunctions can output:\n- Single schema objects\n- Multiple objects by schema\n- Object arrays\n\n#### Single Schema Output Example\n```javascript\nconsole.log('$%*%$Output:' + JSON.stringify({\n  title: \"My Processed Content\",\n  processedContent: \"THIS IS THE UPPERCASE CONTENT\"\n}));\n```\n\n#### Multiple Schema Objects Output Example\nSchema names for this example are: Video, Pictory Request, Pictory Render, Pictory Job\n```javascript\nconsole.log('$%*%$Output:' + JSON.stringify({\n  video: { id: \"123\", url: \"https://example.com/video.mp4\" },\n  pictoryRequest: { content: \"Original content here\" },\n  pictoryRender: { status: \"complete\", progress: 100 },\n  pictoryJob: { id: \"job123\", status: \"finished\" }\n}));\n```\n\n#### Array Output Example\n```javascript \nconsole.log('$%*%$Output:' + JSON.stringify({\n  processedItems: [\n    { id: 1, content: \"FIRST ITEM\" },\n    { id: 2, content: \"SECOND ITEM\" },\n    { id: 3, content: \"THIRD ITEM\" }\n  ]\n}));\n```\n\n### Templates\nThe `/functions/` directory includes:\n- JavaScript and Python examples\n- Input/output patterns\n- Status update examples\n- Error handling templates\n\n---\n\n## Roadmap\n\nCurrent development plans include:\n\n### Upcoming Features\n- **Event Scheduling**: Automated pipeline execution (done)\n- **LangChain 
Integration**: Additional AI model support\n- **AI Building Tools**: Simplified assistant setup\n- **Development Tools**: Enhanced debugging and code support\n\n### Future Plans\n- More model options\n- Pipeline templates\n- Team development features\n- Enterprise tools\n\n---\n\n## Community\n\nCreated by JakeDoesDev with support from developers and AI enthusiasts.\n\nJoin our Discord for:\n- Technical help\n- Feature discussions\n- Resources\n- Updates\n\n[![Discord](https://img.shields.io/discord/1113687849674745896?label=JakeDoesDev%20Discord\u0026style=for-the-badge)](https://discord.gg/6BJTUpDSsE)\n\n---\n\n## Credits\n\nLuminary was created by Jake of JakeDoesDev.com\n\nThis project builds on the [GPT Assistant UI by PaulWeinsberg](https://github.com/PaulWeinsberg/gpt-assistant-ui/), extending it into a development platform while maintaining open-source principles.\n\nMade with [Windsurf](https://codeium.com/windsurf)\n\n---\n\n## License\n\nMIT License. See LICENSE.md for details.\n\n---\n\n\u003e **Thanks for checking out Luminary!**  \n\u003e If you find it valuable, [drop a star](https://github.com/nascarjake/luminary) and help grow our community. Enjoy building your AI pipelines!","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fnascarjake%2Fluminary","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fnascarjake%2Fluminary","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fnascarjake%2Fluminary/lists"}