Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
A Lexer Named Kaleidoscope
https://github.com/flickersoul/kaleidoscope
- Host: GitHub
- URL: https://github.com/flickersoul/kaleidoscope
- Owner: FlickerSoul
- Created: 2023-11-30T03:52:01.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2024-01-08T22:27:24.000Z (12 months ago)
- Last Synced: 2024-01-08T23:35:00.631Z (12 months ago)
- Language: Swift
- Size: 153 KB
- Stars: 2
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Kaleidoscope
This is a lexer inspired by [logos](https://github.com/maciejhirsz/logos). It uses Swift macros to make lexer creation easy.
## Example
```swift
import Kaleidoscope

let lambda: (inout LexerMachine<Tokens>) -> Substring = { $0.slice }

@kaleidoscope(skip: " |\t|\n")
enum Tokens {
    @token("not")
    case Not

    @regex("very")
    case Very

    @token("tokenizer")
    case Tokenizer

    // you could feed a closure directly to `onMatch`, but Swift doesn't accept it for some reason;
    // this seems to be a compiler bug (https://github.com/apple/swift/issues/70322)
    @regex("[a-zA-Z_][a-zA-Z1-9$_]*?", onMatch: lambda)
    case Identifier(Substring)
}

for token in Tokens.lexer(source: "not a very fast tokenizer").map({ try! $0.get() }) {
    print(token)
}
```

The output will be:
```text
Not
Identifier("a")
Very
Identifier("fast")
Tokenizer
```

## Idea
The project provides three macros: `@kaleidoscope`, `@regex`, and `@token`, which work together to generate conformance to `LexerProtocol` for the decorated enums. `@regex` takes a regular expression to match against, and `@token` takes a string for exact matching. In addition, both can take an `onMatch` callback and a `priority` integer. The callback has access to the matched string slice and can further transform it into whatever type the enum case requires. Priorities are calculated from the expression by default; however, if two expressions end up with the same weight, a manual priority must be specified to resolve the conflict.
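As a sketch of that conflict resolution (the `priority:` argument label here is an assumption based on the description above, not a verified signature): an exact token that also matches a broader regex can be given an explicit priority so it wins the tie.

```swift
import Kaleidoscope

@kaleidoscope(skip: " ")
enum Conflicting {
    // "very" also matches the word regex below; if the computed
    // weights tie, an explicit priority resolves the ambiguity.
    @token("very", priority: 3)
    case Very

    @regex("[a-z]+")
    case Word
}
```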
Internally, all regex expressions and token strings are compiled into a single finite automaton. The automaton consumes one character of input at a time until it reaches a token match or an error. This mechanism is simple but slow; future improvements could target it.
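Purely as an illustration of that loop (hypothetical types, not Kaleidoscope's actual internals), a character-at-a-time automaton driver following the description above might look like this:

```swift
// A minimal table-driven automaton sketch: each state maps input characters
// to successor states, and accepting states carry the index of a token.
struct AutomatonState {
    let transitions: [Character: Int]
    let acceptedToken: Int?
}

// Walk the automaton one character at a time, returning at the first
// accepting state reached, or nil when no transition exists (an error).
func firstMatch(states: [AutomatonState], input: Substring) -> (token: Int, length: Int)? {
    var current = 0
    for (offset, character) in input.enumerated() {
        guard let next = states[current].transitions[character] else { return nil }
        current = next
        if let token = states[current].acceptedToken {
            return (token, offset + 1)
        }
    }
    return nil
}
```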
## Note
This package uses [`_RegexParser`](https://github.com/apple/swift-experimental-string-processing), an internal module of the experimental string-processing library. Please check that repository for compatibility. Because the library is experimental, it may introduce breaking changes in the future.
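For reference, a `Package.swift` that pulls in the experimental package might look roughly like this (the consumer package name, branch pin, and product name below are assumptions; check the linked repository for what currently works):

```swift
// swift-tools-version: 5.9
import PackageDescription

let package = Package(
    name: "MyLexer",  // hypothetical consumer package
    dependencies: [
        // Experimental: APIs and products here may break without notice.
        .package(
            url: "https://github.com/apple/swift-experimental-string-processing",
            branch: "main"  // assumption: no stable tag is guaranteed
        ),
    ],
    targets: [
        .target(
            name: "MyLexer",
            dependencies: [
                .product(name: "_RegexParser",
                         package: "swift-experimental-string-processing"),
            ]
        ),
    ]
)
```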
## Future Improvements
- [ ] faster tokenization
- [ ] cleaner code generation
- [ ] cleaner interface