https://github.com/kwakubiney/sqlite-go
A naïve implementation of a hash-table indexed file storage engine.
- Host: GitHub
- URL: https://github.com/kwakubiney/sqlite-go
- Owner: kwakubiney
- Created: 2022-04-05T16:27:21.000Z (about 3 years ago)
- Default Branch: master
- Last Pushed: 2023-09-03T17:09:58.000Z (almost 2 years ago)
- Last Synced: 2025-01-21T02:44:37.105Z (5 months ago)
- Topics: database, filesystem, storage
- Language: Go
- Homepage:
- Size: 1.16 MB
- Stars: 3
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# sqlite-go
A naïve implementation of a persistent disk database using Golang.

# Demo

# How was it built?
1) Basically, I use an append-only file named `db` and an index file named `index`, which get created when the DB is opened by running `go run main.go`.
2) When rows are inserted, I maintain a `map` in memory that stores the row `ID` as the key and the byte offset of the encoded row as the value, for faster lookups during a `select` (see the sketch after this list).
3) The index map is flushed to disk when the app exits and loaded back into memory when the DB starts up again.
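Below is a minimal sketch of that index idea, assuming a hypothetical `Index` type and `gob` encoding for the flush; the repo itself may use a different layout and serialization format. The map keys each row `ID` to the byte offset of its encoded row in `db`, is written to the `index` file on exit, and is read back on the next start.

```go
package db

import (
	"encoding/gob"
	"os"
)

// Index maps a row ID to the byte offset of its encoded row in the
// append-only `db` file. (Hypothetical type; the repo's layout may differ.)
type Index map[uint32]int64

// Flush writes the whole map to the index file, e.g. when the app exits.
func (idx Index) Flush(path string) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()
	return gob.NewEncoder(f).Encode(idx)
}

// Load reads the map back from disk when the DB starts up again.
func Load(path string) (Index, error) {
	f, err := os.Open(path)
	if err != nil {
		if os.IsNotExist(err) {
			return Index{}, nil // fresh database: start with an empty index
		}
		return nil, err
	}
	defer f.Close()
	idx := Index{}
	if err := gob.NewDecoder(f).Decode(&idx); err != nil {
		return nil, err
	}
	return idx, nil
}
```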
# How was serialization done?
1) I encode strings using the format `::` along with their lengths (in `32` bits, `big endian` format) during a single insert (sketched below).
2) The encoded lengths make it possible to seek through the file and read specific rows during a `select`.
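A rough sketch of what such length-prefixed encoding could look like, assuming hypothetical `writeString`/`readString` helpers (the names and the placeholder email are mine, not taken from the repo): each string is written as a 32-bit big-endian length followed by its bytes, and the reader uses that length to know exactly how many bytes to read or skip.

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"io"
)

// writeString emits a 32-bit big-endian length followed by the raw bytes.
func writeString(w io.Writer, s string) error {
	if err := binary.Write(w, binary.BigEndian, uint32(len(s))); err != nil {
		return err
	}
	_, err := io.WriteString(w, s)
	return err
}

// readString reverses writeString: read the length, then exactly that many bytes.
func readString(r io.Reader) (string, error) {
	var n uint32
	if err := binary.Read(r, binary.BigEndian, &n); err != nil {
		return "", err
	}
	buf := make([]byte, n)
	if _, err := io.ReadFull(r, buf); err != nil {
		return "", err
	}
	return string(buf), nil
}

func main() {
	var row bytes.Buffer
	writeString(&row, "Kwaku")             // Name field
	writeString(&row, "kwaku@example.com") // Email field (placeholder value)

	name, _ := readString(&row)
	email, _ := readString(&row)
	fmt.Println(name, email)
}
```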
# Why?
To learn how disk-based databases and indexes work.

# How to use
This database supports `insert` and `select` of rows with the fields `ID`, `Name`, and `Email`.

# How to do an insert
`insert <ID> <Name> <Email>`
`example` : `insert 1 Kwaku [email protected]`
# How to do a select of a specific row
`select <ID>`
`example` : `select 1`
# How to select all rows
`select`
`example` : `select`
# Things I will do in the future
1) There is no control on how large the DB file can grow, so compaction must be done to limit its growth.
2) Implement `~4KB` paging to make DB reads on disk faster and more efficient (a rough sketch follows this list).
3) Concurrency and all the other cool stuff... when I understand them well enough.
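For the planned `~4KB` paging, here is a purely illustrative sketch (not code from this repo) of reading the `db` file in fixed-size pages by page number, so a lookup touches one page rather than an arbitrary byte range:

```go
package main

import (
	"fmt"
	"io"
	"os"
)

const pageSize = 4096 // the ~4KB page size mentioned above

// readPage fetches one fixed-size page from the data file by page number.
func readPage(f *os.File, pageNum int64) ([]byte, error) {
	buf := make([]byte, pageSize)
	n, err := f.ReadAt(buf, pageNum*pageSize)
	if err != nil && err != io.EOF {
		return nil, err
	}
	return buf[:n], nil // the last page may be shorter than pageSize
}

func main() {
	f, err := os.Open("db")
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer f.Close()

	page, err := readPage(f, 0)
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	fmt.Printf("read %d bytes from page 0\n", len(page))
}
```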