{"id":19746332,"url":"https://github.com/symisc/tiny-dream","last_synced_at":"2025-04-09T15:10:00.418Z","repository":{"id":178368710,"uuid":"651324251","full_name":"symisc/tiny-dream","owner":"symisc","description":"Tiny Dream - An embedded, Header Only, Stable Diffusion C++ implementation","archived":false,"fork":false,"pushed_at":"2023-10-31T15:36:40.000Z","size":137,"stargazers_count":260,"open_issues_count":1,"forks_count":11,"subscribers_count":15,"default_branch":"main","last_synced_at":"2025-04-09T15:09:54.619Z","etag":null,"topics":["ai","cpp","cpp-library","embedded","generative-art","header-only","image-generation","latent-diffusion","library","machine-learning","stable-diffusion","text2image","txt2img","txt2img-generation"],"latest_commit_sha":null,"homepage":"https://pixlab.io/tiny-dream","language":"C","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/symisc.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-06-09T02:11:32.000Z","updated_at":"2025-04-05T19:58:12.000Z","dependencies_parsed_at":null,"dependency_job_id":"413571e4-c4eb-47f7-ac08-bc2f78114658","html_url":"https://github.com/symisc/tiny-dream","commit_stats":null,"previous_names":["symisc/tiny-dream"],"tags_count":1,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/symisc%2Ftiny-dream","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/symisc%2Ftiny-dream/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/symisc%2Ftiny-dream/releases","ma
nifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/symisc%2Ftiny-dream/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/symisc","download_url":"https://codeload.github.com/symisc/tiny-dream/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248055282,"owners_count":21040157,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","cpp","cpp-library","embedded","generative-art","header-only","image-generation","latent-diffusion","library","machine-learning","stable-diffusion","text2image","txt2img","txt2img-generation"],"created_at":"2024-11-12T02:14:13.199Z","updated_at":"2025-04-09T15:10:00.399Z","avatar_url":"https://github.com/symisc.png","language":"C","readme":"\u003ch1 align=\"center\"\u003eTINY DREAM\u003cbr\u003e\u003cbr\u003eAn embedded, Header Only, Stable Diffusion Inference C++ Library\u003cbr\u003e\u003ca href=\"https://pixlab.io/tiny-dream\"\u003epixlab.io/tiny-dream\u003c/a\u003e\u003c/h1\u003e\n\n![td_screen_website](https://github.com/symisc/tiny-dream/assets/4615920/b4e9f6b3-4019-4d48-9e3e-879a071213a5)\n\n\u003ch5\u003e\u003cem\u003eLatest News\u003c/em\u003e 🔥\u003c/h5\u003e\n\u003cul\u003e\n\t\u003cli\u003e\u003cstrong\u003eTiny Dream 1.7.5 \u003ca href=\"https://github.com/symisc/tiny-dream/releases/tag/1.7.5\"\u003eReleased\u003c/a\u003e - \u003ca href=\"https://pixlab.io/tiny-dream\"\u003eGet Started\u003c/a\u003e\u003c/strong\u003e.\u003c/li\u003e\n\u003c/ul\u003e\n\n[![API 
documentation](https://img.shields.io/badge/API%20documentation-Ready-green.svg)](https://pixlab.io/tiny-dream)\n[![dependency](https://img.shields.io/badge/dependency-none-ff96b4.svg)](https://pixlab.io/tiny-dream#downloads)\n[![license](https://img.shields.io/badge/License-dual--licensed-blue.svg)](https://pixlab.io/tiny-dream#license)\n\n* [Introduction](#tiny-dream)\n* [Features](#td-features)\n* [Getting Started](#td-start)\n* [Downloads](https://pixlab.io/tiny-dream#downloads)\n* [Project Roadmap](#roadmap)\n* [License](https://pixlab.io/tiny-dream#license)\n* [C++ API Reference Guide](https://pixlab.io/tiny-dream#cpp-api)\n* [Issues Tracker](https://github.com/symisc/tiny-dream/issues)\n* [Related Projects](#td-projects)\n\n\u003ch2 id=\"tiny-dream\"\u003eIntroducing PixLab's Tiny Dream\u003c/h2\u003e\n\u003cp\u003e\u003ca href=\"https://pixlab.io/tiny-dream\" target=\"_blank\"\u003eTiny Dream\u003c/a\u003e is a header only, dependency free, \u003cstrong\u003epartially uncensored, Stable Diffusion implementation written in C++\u003c/strong\u003e with a primary focus on CPU efficiency and a small memory footprint. \u003cstrong\u003eTiny Dream\u003c/strong\u003e runs reasonably \u003ca href=\"https://pixlab.io/tiny-dream#features\"\u003efast\u003c/a\u003e on average consumer hardware, \u003ca href=\"https://pixlab.io/tiny-dream#features\"\u003erequires\u003c/a\u003e \u003cstrong\u003eonly 1.7 ~ 5.5 GB of RAM\u003c/strong\u003e to execute, does not require an Nvidia GPU, and \u003cstrong\u003eis designed to be \u003ca href=\"https://pixlab.io/tiny-dream#getting-started\"\u003eembedded\u003c/a\u003e in larger codebases (host programs) with an easy-to-use \u003ca href=\"https://pixlab.io/tiny-dream#cpp-api\"\u003eC++ API\u003c/a\u003e\u003c/strong\u003e. 
The possibilities are literally endless, or at least extend to the boundaries of Stable Diffusion's latent manifold.\u003c/p\u003e\n\u003ch2 id=\"td-features\"\u003eFeatures 🔥\u003c/h2\u003e\n\u003cem\u003eFor the extensive list of features, please refer to the official documentation \u003ca href=\"https://pixlab.io/tiny-dream#features\" target=\"_blank\"\u003e\u003cstrong\u003ehere\u003c/strong\u003e\u003c/a\u003e.\u003c/em\u003e\n\u003cbr\u003e\u003cbr\u003e\n\u003cul\u003e\n  \u003cli\u003e\u003cstrong\u003eOpenCV Dependency Free\u003c/strong\u003e: Only \u003cfont face=\"courier\"\u003e\u003ca href=\"https://github.com/nothings/stb/blob/master/stb_image_write.h\" target=\"_blank\"\u003estb_image_write.h\u003c/a\u003e\u003c/font\u003e from the excellent \u003ca href=\"https://github.com/nothings/stb/\" target=\"_blank\"\u003estb \u003cem class=\"ti ti-new-window\"\u003e\u003c/em\u003e\u003c/a\u003e single-header, public domain C library is required for saving images to disk.\u003c/li\u003e\n  \u003cli\u003e\u003cstrong\u003eSmallest Run-Time \u003ca href=\"https://pixlab.io/tiny-dream#features\" target=\"_blank\"\u003eMemory Footprint\u003c/a\u003e for Running Stable Diffusion in Inference\u003c/strong\u003e.\u003c/li\u003e\n  \u003cli\u003e\u003cstrong\u003eStraightforward to \u003ca href=\"https://pixlab.io/tiny-dream#getting-started\" target=\"_blank\"\u003eIntegrate in Existing Codebases\u003c/a\u003e\u003c/strong\u003e: Just drop \u003cfont face=\"courier\"\u003e\u003cem\u003etinydream.hpp\u003c/em\u003e\u003c/font\u003e and \u003cfont face=\"courier\"\u003e\u003cem\u003estb_image_write.h\u003c/em\u003e\u003c/font\u003e into your source tree with the \u003ca href=\"https://pixlab.io/tiny-dream#downloads\"\u003e\u003cstrong\u003ePre-trained Models \u0026 Assets\u003c/strong\u003e\u003c/a\u003e.\u003c/li\u003e\n    \u003cli\u003e\u003cstrong\u003eReasonably fast on Intel/AMD CPUs (\u003ca 
href=\"https://pixlab.io/tiny-dream#bench\"\u003eBenchmarks\u003c/a\u003e)\u003c/strong\u003e: With TBB threading and SSE/AVX vectorization.\u003c/li\u003e\n    \u003cli\u003e\u003cstrong\u003eSupport for \u003ca href=\"https://github.com/xinntao/Real-ESRGAN\" target=\"_blank\"\u003eReal-ESRGAN\u003c/a\u003e, a Super-Resolution Network Upscaler\u003c/strong\u003e.\u003c/li\u003e\n    \u003cli\u003e\u003cstrong\u003eFull Support for Word Priority\u003c/strong\u003e: Instruct the model to pay attention, and \u003cstrong\u003egive higher priority\u003c/strong\u003e, to words (\u003cem\u003ekeywords\u003c/em\u003e) surrounded by parentheses \u003cem\u003e\u003cstrong\u003e()\u003c/strong\u003e\u003c/em\u003e.\u003c/li\u003e\n    \u003cli\u003e\u003cstrong\u003eSupport for Output Metadata\u003c/strong\u003e: Link meta information to your output images, such as a \u003cem\u003ecopyright notice\u003c/em\u003e, \u003cem\u003ecomments\u003c/em\u003e, or any other metadata you would like to see linked to your image.\u003c/li\u003e\n    \u003cli\u003e\u003cstrong\u003eSupport for Stable Diffusion Extra Parameters\u003c/strong\u003e: Adjust \u003ca href=\"https://pixlab.io/tiny-dream#tiny-dream-method\"\u003eSeed resizing\u003c/a\u003e \u0026 \u003ca href=\"https://pixlab.io/tiny-dream#tiny-dream-method\"\u003eGuidance Scale\u003c/a\u003e.\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch2 id=\"td-start\"\u003eGetting Started with Tiny-Dream 🔥\u003c/h2\u003e\n\u003cp\u003e\u003cstrong\u003eIntegrating Tiny Dream into your existing code base is straightforward\u003c/strong\u003e. 
Here is what to do, without a lot of tedious reading and configuration:\u003c/p\u003e\n\u003ch4\u003eDownload Tiny-Dream\u003c/h4\u003e\n\u003cul\u003e\n  \u003cli\u003e\u003ca href=\"https://github.com/symisc/tiny-dream/releases\"\u003eDownload\u003c/a\u003e the latest public release of Tiny Dream, and extract the package into a directory of your choice.\u003c/li\u003e\n  \u003cli\u003eRefer to the \u003ca href=\"https://pixlab.io/tiny-dream#downloads\"\u003edownloads section\u003c/a\u003e to get a copy of the Tiny Dream source code as well as the \u003cstrong\u003ePre-Trained Models \u0026 Assets\u003c/strong\u003e.\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch4\u003eEmbedding Tiny-Dream\u003c/h4\u003e\n    \u003cul\u003e\n      \u003cli\u003eThe Tiny Dream source code \u003ca href=\"https://pixlab.io/tiny-dream#downloads\"\u003ecomprises\u003c/a\u003e \u003cstrong\u003eonly two header files\u003c/strong\u003e, namely \u003cfont face=\"courier\"\u003e\u003cstrong\u003etinydream.hpp\u003c/strong\u003e\u003c/font\u003e and \u003cfont face=\"courier\"\u003e\u003cstrong\u003estb_image_write.h\u003c/strong\u003e\u003c/font\u003e.\u003c/li\u003e\n      \u003cli\u003eAll you have to do is drop these two C/C++ header files into your source tree, and \u003ca href=\"https://pixlab.io/tiny-dream#tiny-dream-constructor\"\u003einstantiate\u003c/a\u003e a new \u003cfont face=\"courier\"\u003etinyDream\u003c/font\u003e object as shown in the pseudo C++ code below:\u003c/li\u003e\n    \u003c/ul\u003e\n    \n```\n#include \"tinydream.hpp\"\n/*\n* Main Entry Point. 
The only required argument is the Positive Prompt.\n* Passing a Negative Prompt (words separated by commas) is highly recommended though.\n* \n* We recommend that you experiment with different seed \u0026 step values\n* in order to achieve a desirable result.\n* \n* ./tinydream \"positive prompt\" [\"negative prompt\"] [seed] [step]\n*/\nint main(int argc, char *argv[]) \n{\n\ttinyDream td; // stack allocated tinyDream object\n\n\t// Display the library's current inference engine, version number, and copyright notice\n\tstd::cout \u003c\u003c tinyDream::about() \u003c\u003c std::endl;\n\t\n\t// At least a positive prompt must be supplied via the command line\n\tif (argc \u003c 2) {\n\t\tstd::cout \u003c\u003c \"Missing Positive (and potentially Negative) Prompt: Describe something you'd like to see generated...\" \u003c\u003c std::endl;\n\t\tstd::cout \u003c\u003c \"Example of Prompts:\" \u003c\u003c std::endl;\n\t\t// Example of built-in Positive/Negative Prompts\n\t\tauto prompts = tinyDream::promptExample();\n\t\tstd::cout \u003c\u003c \"\\tPositive Prompt: \" \u003c\u003c prompts.first \u003c\u003c std::endl;\n\t\tstd::cout \u003c\u003c \"\\tNegative Prompt: \" \u003c\u003c prompts.second \u003c\u003c std::endl;\n\t\treturn -1;\n\t}\n\n\t// Register a log handler callback responsible for \n\t// consuming log messages generated during inference.\n\ttd.setLogCallback(logCallback, nullptr);\n\t\n\t// Optionally, set the assets path if the pre-trained models\n\t// are not extracted in the same directory as your executable.\n\t// The Tiny-Dream assets can be downloaded from: https://pixlab.io/tiny-dream#downloads\n\ttd.setAssetsPath(\"/path/to/tinydream/assets\"); // Remove or comment this out if your assets are located in the same directory as your executable\n\t\n\t// Optionally, set a prefix of your choice for each freshly generated image name\n\ttd.setImageOutputPrefix(\"tinydream-\");\n\t\n\t// Optionally, set the directory where you want\n\t// the generated images to be 
stored\n\ttd.setImageOutputPath(\"/home/photos/\");\n\t\n\tint seedMax = 90;\n\tif (argc \u003e 3) {\n\t\t/*\n\t\t* The seed in Stable Diffusion is a number used to initialize the generation.\n\t\t* Controlling the seed can help you generate reproducible images, experiment\n\t\t* with other parameters, or try prompt variations. The loop below generates\n\t\t* one image per seed value, from 1 up to seedMax.\n\t\t*/\n\t\tseedMax = std::atoi(argv[3]);\n\t}\n\tint step = 30;\n\tif (argc \u003e 4) {\n\t\t/*\n\t\t* Adjusting the inference steps in Stable Diffusion: the more steps you use,\n\t\t* the better the quality you'll achieve, but there is no point in setting\n\t\t* the step count arbitrarily high. Around 30 sampling steps (the default value)\n\t\t* are usually enough to achieve high-quality images.\n\t\t*/\n\t\tstep = std::atoi(argv[4]);\n\t}\n\n\t/*\n\t* User Supplied Prompts - Generate an image that matches the input criteria.\n\t* \n\t* Positive Prompt (required): Describe something you'd like to see generated (comma separated words).\n\t* Negative Prompt (optional): Describe something you don't want to see generated (comma separated words).\n\t*/\n\tstd::string positivePrompt{ argv[1] };\n\tstd::string negativePrompt{ \"\" };\n\tif (argc \u003e 2) {\n\t\tnegativePrompt = std::string{ argv[2] };\n\t}\n\n\t/*\n\t* Finally, run Stable Diffusion in inference.\n\t* \n\t* The log consumer callback registered previously should shortly receive\n\t* all log messages (including errors, if any) generated during inference.\n\t* \n\t* Refer to the official documentation at: https://pixlab.io/tiny-dream#tiny-dream-method\n\t* for the expected parameters the tinyDream::dream() method takes.\n\t*/\n\tfor (int seed = 1; seed \u003c seedMax; seed++) {\n\t\tstd::string outputImagePath;\n\n\t\ttd.dream(\n\t\t\tpositivePrompt, \n\t\t\tnegativePrompt, \n\t\t\toutputImagePath, \n\t\t\ttrue, /* Set to false if you want 512x512 pixel output instead of 2048x2048 output */\n\t\t\tseed,\n\t\t\tstep\n\t\t);\n\n\t\t// You do not need to display the generated image path manually each time via 
std::cout\n\t\t// as the supplied log callback should have already done that.\n\t\tstd::cout \u003c\u003c \"Output Image location: \" \u003c\u003c outputImagePath \u003c\u003c std::endl; // comment this out if too intrusive\n\t}\n\treturn 0;\n}\n```\n\u003ch4\u003eLearn the Fundamentals (C++ API)\u003c/h4\u003e\n\u003cul\u003e\n\t\u003cli\u003eThe above code should be self-explanatory and easy to understand for the average C++ programmer. The \u003cstrong\u003efull C++ integration code\u003c/strong\u003e for a typical application embedding Tiny Dream is located at: \u003ca href=\"https://pixlab.io/tiny-dream#code-gist\"\u003epixlab.io/tiny-dream#code-gist\u003c/a\u003e.\u003c/li\u003e\n\t\u003cli\u003eAs of this release, the library exposes a single class named \u003ccode\u003etinyDream\u003c/code\u003e with the following exported methods:\n\t\t\u003cul\u003e\n\t\t\t\u003cli\u003e\u003ca href=\"https://pixlab.io/tiny-dream#tiny-dream-constructor\"\u003etinyDream::tinyDream()\u003c/a\u003e - \u003cem\u003eConstructor\u003c/em\u003e\u003c/li\u003e\n                        \u003cli\u003e\u003ca href=\"https://pixlab.io/tiny-dream#tiny-dream-method\"\u003e\u003cstrong\u003etinyDream::dream()\u003c/strong\u003e\u003c/a\u003e - \u003cem\u003eStable Diffusion Inference\u003c/em\u003e\u003c/li\u003e\n                        \u003cli\u003e\u003ca href=\"https://pixlab.io/tiny-dream#set-img-output-method\"\u003etinyDream::setImageOutputPath()\u003c/a\u003e\u003c/li\u003e\n                        \u003cli\u003e\u003ca href=\"https://pixlab.io/tiny-dream#set-img-output-prefix\"\u003etinyDream::setImageOutputPrefix()\u003c/a\u003e\u003c/li\u003e\n\t\t\t\u003cli\u003e\u003ca href=\"https://pixlab.io/tiny-dream#set-log-callback\"\u003etinyDream::setLogCallback()\u003c/a\u003e\u003c/li\u003e\n                        \u003cli\u003e\u003ca href=\"https://pixlab.io/tiny-dream#set-assets-path-method\"\u003etinyDream::setAssetsPath()\u003c/a\u003e\u003c/li\u003e\n                        
\u003cli\u003e\u003ca href=\"https://pixlab.io/tiny-dream#prompt-example-method\"\u003etinyDream::promptExample()\u003c/a\u003e\u003c/li\u003e\n                        \u003cli\u003e\u003ca href=\"https://pixlab.io/tiny-dream#about-method\"\u003etinyDream::about()\u003c/a\u003e\u003c/li\u003e\n\t\t\u003c/ul\u003e\n\t\u003c/li\u003e\n\t\u003cli\u003e\u003cstrong\u003eA step-by-step, detailed integration guide, and the call logic of the above methods, is located at: \u003ca href=\"https://pixlab.io/tiny-dream#step-by-step-cpp\"\u003epixlab.io/tiny-dream#step-by-step-cpp\u003c/a\u003e\u003c/strong\u003e.\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch4\u003eBuilding Tiny-Dream\u003c/h4\u003e\n\u003cul\u003e\n\t\u003cli\u003eBuilding Tiny-Dream from source requires a modern C++17 compiler such as GCC 7 or later, Clang, or Microsoft Visual Studio (MSVC).\u003c/li\u003e\n\t\u003cli\u003eYou also \u003cstrong\u003eneed to link against the default backend tensor library\u003c/strong\u003e in order to generate the executable.\u003c/li\u003e\n\t\u003cli\u003eAs of this release, \u003ca href=\"https://github.com/Tencent/ncnn/wiki/how-to-build\" target=\"_blank\"\u003eNCNN \u003cem class=\"ti ti-new-window\"\u003e\u003c/em\u003e\u003c/a\u003e is the default tensor library. On our \u003ca href=\"#roadmap\"\u003eRoadmap\u003c/a\u003e, we plan to replace \u003cfont face=\"courier\"\u003encnn\u003c/font\u003e with a less bloated tensor library such as \u003ca href=\"https://sod.pixlab.io\" target=\"_blank\"\u003eSOD\u003c/a\u003e or \u003ca href=\"https://github.com/ggerganov/ggml\" target=\"_blank\"\u003eGGML\u003c/a\u003e with a focus on CPU efficiency.\u003c/li\u003e\n\t\u003cli\u003eAlternatively, you can rely on a build manager such as CMake to build the executable for you. 
The Tiny-Dream repository already contains the necessary CMake template to build the executable from source.\u003c/li\u003e\n\t\u003cli\u003eAn example of generating a heavily optimized executable without relying on an external build manager is shown below:\u003c/li\u003e\n\u003c/ul\u003e\n\n```\ngit clone https://github.com/symisc/tiny-dream.git\ncd tiny-dream\ng++ -o tinydream boilerplate.cpp -funsafe-math-optimizations -Ofast -flto=auto  -funroll-all-loops -pipe -march=native -std=c++17 -Wall -Wextra `pkg-config --cflags --libs ncnn` -lstdc++ -pthread -Wl -flto -fopt-info-vec-optimized\n./tinydream \"pyramid, desert, palm trees, river, (landscape), (high quality)\"\n```\n\u003ch4\u003eGet the Pre-Trained Models \u0026 Assets\u003c/h4\u003e\n\u003cul\u003e\n\t\u003cli\u003eOnce your executable is built, \u003cstrong\u003eyou will need the Tiny Dream \u003ca href=\"https://pixlab.io/tiny-dream#downloads\"\u003ePre-Trained Models \u0026 Assets\u003c/a\u003e path accessible to your executable\u003c/strong\u003e.\u003c/li\u003e\n\t\u003cli\u003eThe Tiny Dream assets comprise all the pre-trained models (\u003cstrong\u003eover 2GB as of this release\u003c/strong\u003e) required by the \u003ca href=\"https://pixlab.io/tiny-dream#tiny-dream-method\"\u003e\u003cfont face=\"courier\"\u003etinyDream::dream()\u003c/font\u003e\u003c/a\u003e method in order to run Stable Diffusion in inference.\u003c/li\u003e\n\t\u003cli\u003eYou can download the pre-trained models from the \u003ca href=\"https://pixlab.io/tiny-dream#downloads\"\u003eDownload\u003c/a\u003e section on the \u003ca href=\"https://pixlab.io/\"\u003ePixLab\u003c/a\u003e website.\u003c/li\u003e\n\t\u003cli\u003eOnce downloaded, extract the assets ZIP archive into a directory of your choice (usually the directory where your executable is located), and set the full path via \u003cfont face=\"courier\"\u003e\u003ca 
href=\"https://pixlab.io/tiny-dream#set-assets-path-method\"\u003etinyDream::setAssetsPath()\u003c/a\u003e\u003c/font\u003e or from the Tiny Dream \u003ca href=\"https://pixlab.io/tiny-dream#tiny-dream-constructor\"\u003econstructor\u003c/a\u003e.\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch4\u003eContinue with the C++ API Reference Guide\u003c/h4\u003e\n\u003cp\u003eThe Tiny Dream \u003ca href=\"https://pixlab.io/tiny-dream#cpp-api\"\u003eC++ Interface\u003c/a\u003e provides detailed specifications for all of the various methods the Tiny Dream class exports. Once the reader understands the basic principles of operation for \u003cstrong\u003eTiny Dream\u003c/strong\u003e, that \u003ca href=\"https://pixlab.io/tiny-dream#cpp-api\"\u003edocument\u003c/a\u003e should serve as a reference guide.\u003c/p\u003e\n\u003ch2 id=\"roadmap\"\u003eTODOs \u0026 Roadmap 🔥\u003c/h2\u003e\n\u003cp\u003eAs we continue to develop and improve Tiny Dream, we have an exciting roadmap of future add-ons and enhancements planned. 
Refer to the Roadmap page at \u003ca href=\"https://pixlab.io/tiny-dream#roadmap\"\u003epixlab.io/tiny-dream\u003c/a\u003e or the \u003ca href=\"https://blog.pixlab.io\"\u003ePixLab Blog\u003c/a\u003e for the exhaustive list of todos \u0026 ongoing progress...\u003c/p\u003e\n\u003cul\u003e\n\t\u003cli\u003e\u003cstrong\u003eMove the tensor library to a non-bloated one such as \u003ca href=\"https://sod.pixlab.io/\"\u003eSOD\u003c/a\u003e or \u003ca href=\"https://github.com/ggerganov/ggml\"\u003eGGML\u003c/a\u003e with a focus on CPU performance\u003c/strong\u003e.\u003c/li\u003e\n\t\u003cli\u003e\u003cstrong\u003eProvide a Cross-Platform GUI for Tiny Dream implemented in \u003ca href=\"https://github.com/ocornut/imgui\"\u003eDear ImGui\u003c/a\u003e\u003c/strong\u003e.\u003c/li\u003e\n\t\u003cli\u003eProvide a WebAssembly port of the library once the future tensor library (SOD or GGML) is ported to WASM.\u003c/li\u003e\n\t\u003cli\u003eOutput SVG and other easy-to-alter formats (potentially PSD) rather than static PNGs.\u003c/li\u003e\n\t\u003cli\u003eProvide a proof-of-concept, showcase Android APK.\u003c/li\u003e\n\u003c/ul\u003e\n\u003ch2\u003eOfficial Docs \u0026 Resources\u003c/h2\u003e\n\u003ctable class=\"table\"\u003e\n    \u003ctbody\u003e\n      \u003ctr\u003e\n        \u003cth\u003e\u003ca href=\"https://pixlab.io/tiny-dream#downloads\"\u003ePre-Trained Models \u0026 Assets Downloads\u003c/a\u003e\u003c/th\u003e\n        \u003ctd\u003e\u003ca href=\"https://pixlab.io/tiny-dream#getting-started\"\u003eGetting Started Guide\u003c/a\u003e\u003c/td\u003e\n        \u003ctd\u003e\u003ca href=\"https://pixlab.io/tiny-dream#license\"\u003eLicensing\u003c/a\u003e\u003c/td\u003e\n        \u003ctd\u003e\u003ca href=\"https://pixlab.io/tiny-dream#cpp-api\"\u003eC++ API Reference Guide\u003c/a\u003e\u003c/td\u003e\n        \u003ctd\u003e\u003ca href=\"https://pixlab.io/tiny-dream#roadmap\"\u003eProject Roadmap\u003c/a\u003e\u003c/td\u003e\n        \u003ctd\u003e\u003ca 
href=\"https://pixlab.io/tiny-dream#features\"\u003eFeatures\u003c/a\u003e\u003c/td\u003e\n      \u003c/tr\u003e\n    \u003c/tbody\u003e\n  \u003c/table\u003e\n  \u003ch2 id=\"td-projects\"\u003eRelated Projects 🔥\u003c/h2\u003e\n  \u003cp\u003eYou may find the following production-ready projects, developed \u0026 maintained by \u003ca href=\"https://pixlab.io\"\u003ePixLab\u003c/a\u003e | \u003ca href=\"https://symisc.net\"\u003eSymisc Systems\u003c/a\u003e, useful:\u003c/p\u003e\n  \u003cul\u003e\n\t  \u003cli\u003e\u003ca href=\"https://sod.pixlab.io\"\u003eSOD\u003c/a\u003e - An Embedded, Dependency-Free, Computer Vision C/C++ Library.\u003c/li\u003e\n\t  \u003cli\u003e\u003ca href=\"https://faceio.net\"\u003eFACEIO\u003c/a\u003e - A Cross-Browser, Passwordless Facial Authentication Framework.\u003c/li\u003e\n\t  \u003cli\u003e\u003ca href=\"https://annotate.pixlab.io/\"\u003ePixLab Annotate\u003c/a\u003e - An Online Image Annotation, Labeling \u0026 Segmentation Tool.\u003c/li\u003e\n\t  \u003cli\u003e\u003ca href=\"https://pixlab.io/art\"\u003eASCII Art\u003c/a\u003e - A Real-Time ASCII Art Rendering C Library.\u003c/li\u003e\n\t  \u003cli\u003e\u003ca href=\"https://unqlite.org\"\u003eUnQLite\u003c/a\u003e - An Embedded, Transactional Key/Value Database Engine.\u003c/li\u003e\n  \u003c/ul\u003e\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsymisc%2Ftiny-dream","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fsymisc%2Ftiny-dream","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsymisc%2Ftiny-dream/lists"}