Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/akneni/truffle-cpp
Compile-time memory management without caveats
compiler llvm
Last synced: 29 days ago
- Host: GitHub
- URL: https://github.com/akneni/truffle-cpp
- Owner: akneni
- License: mit
- Created: 2024-09-14T04:25:39.000Z (about 2 months ago)
- Default Branch: main
- Last Pushed: 2024-09-21T18:59:14.000Z (about 2 months ago)
- Last Synced: 2024-09-27T10:42:35.146Z (about 1 month ago)
- Topics: compiler, llvm
- Language: C++
- Homepage:
- Size: 1020 KB
- Stars: 1
- Watchers: 1
- Forks: 1
- Open Issues: 0
- Metadata Files:
  - Readme: README.md
  - License: LICENSE
Awesome Lists containing this project
README
![Banner](./branding/truffle-banner.png)
# Truffle
**Compile-time memory management without caveats**

Truffle is a compiler engine that provides an easy way to manage memory at compile time without burdening the user with the unsafety of manual memory management or the complexity of an ownership model. Truffle is also a programming language; this language acts as a front end to the Truffle compiler engine and is meant to showcase its abilities.
---
## 🚧 Work In Progress
**Note:** Truffle is in a very early state and nearly everything is currently under active development.
Key areas currently being worked on:
- **LLVM Integration**: Leveraging LLVM's backend for highly efficient code generation.
- **AutoFree Functionality**: Implementing automatic memory cleanup, where the compiler determines when objects are no longer in use and frees memory without developer intervention. This aims to eliminate manual deallocation while keeping memory use efficient, offering the best of both worlds: high performance without the risk of memory leaks. A minimal sketch of the idea appears after this list.
- **LLM-Enabled Optimization**: This is a highly experimental feature. While LLVM optimizes LLVM IR, it does not focus extensively on optimizing the final machine code for specific target architectures. We are experimenting with using a Large Language Model (LLM) to optimize sections of the final machine code. This feature is inspired by a [research paper published by Meta](https://ai.meta.com/research/publications/meta-large-language-model-compiler-foundation-models-of-compiler-optimization/).
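
To make the AutoFree idea concrete, here is a minimal sketch of a last-use analysis in C++ (the language the compiler itself is written in). This is not Truffle's implementation: the `Stmt` pseudo-IR, the `insertFrees` helper, and the statement format are hypothetical, and a real pass would operate on LLVM IR with proper control-flow and escape analysis. It only illustrates the core step described above: record the last statement that uses each heap allocation and insert a free immediately after it.

```cpp
// Hypothetical sketch (not Truffle's actual implementation).
// Given a function body in a toy pseudo-IR, find each allocation's
// last use and emit a "free" right after that point.
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

struct Stmt {
    std::string text;               // pseudo-IR text, e.g. "buf = alloc 64"
    std::vector<std::string> uses;  // variables read by this statement
    std::string defines;            // variable defined, if it is a heap allocation
};

// Returns the statement list with "free <var>" inserted after each
// allocation's last use.
std::vector<std::string> insertFrees(const std::vector<Stmt>& body) {
    // Pass 1: record the index of the last statement touching each allocation.
    std::unordered_map<std::string, size_t> lastUse;
    for (size_t i = 0; i < body.size(); ++i) {
        if (!body[i].defines.empty()) lastUse[body[i].defines] = i;
        for (const auto& v : body[i].uses)
            if (lastUse.count(v)) lastUse[v] = i;
    }

    // Pass 2: rebuild the body, emitting a free after each last use.
    std::vector<std::string> out;
    for (size_t i = 0; i < body.size(); ++i) {
        out.push_back(body[i].text);
        for (const auto& [var, idx] : lastUse)
            if (idx == i) out.push_back("free " + var);
    }
    return out;
}

int main() {
    std::vector<Stmt> body = {
        {"buf = alloc 64", {}, "buf"},
        {"fill(buf)", {"buf"}, ""},
        {"n = parse(buf)", {"buf"}, ""},  // last use of buf
        {"print(n)", {"n"}, ""},
    };
    for (const auto& line : insertFrees(body))
        std::cout << line << '\n';        // "free buf" appears right after parse(buf)
}
```

Running the example prints the original statements with `free buf` inserted immediately after `parse(buf)`, the last point at which the allocation is live.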