{"id":13580708,"url":"https://github.com/chrisdickinson/git-rs","last_synced_at":"2025-05-16T07:05:36.110Z","repository":{"id":64820914,"uuid":"162784871","full_name":"chrisdickinson/git-rs","owner":"chrisdickinson","description":"git, implemented in rust, for fun and education :crab:","archived":false,"fork":false,"pushed_at":"2023-08-02T22:44:43.000Z","size":218,"stargazers_count":1379,"open_issues_count":2,"forks_count":32,"subscribers_count":30,"default_branch":"main","last_synced_at":"2025-04-08T16:09:30.845Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"Rust","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/chrisdickinson.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE.md","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null}},"created_at":"2018-12-22T05:40:07.000Z","updated_at":"2025-04-03T08:00:27.000Z","dependencies_parsed_at":"2024-01-07T21:04:49.732Z","dependency_job_id":"636f3e96-35bb-456c-91fb-d3c5a061a99d","html_url":"https://github.com/chrisdickinson/git-rs","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/chrisdickinson%2Fgit-rs","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/chrisdickinson%2Fgit-rs/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/chrisdickinson%2Fgit-rs/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/chrisdickinson%2Fgit-rs/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/chrisdickinson","download_url":"https://codeload.github.com/chrisdic
kinson/git-rs/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254485062,"owners_count":22078767,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-01T15:01:54.408Z","updated_at":"2025-05-16T07:05:35.072Z","avatar_url":"https://github.com/chrisdickinson.png","language":"Rust","readme":"# git-rs\n\nImplementing git in rust for fun and education!\n\nIf you're looking for a native Rust Git implementation ready for use in anger, you\nmight consider looking at [`gitoxide`](https://github.com/Byron/gitoxide) instead!\n\nThis is actually my second stab at it, so big blocks will land in place from my\nfirst attempt. 
I'm trying again this year after reading more of \"Programming\nRust\" (Blandy, Orendorff).\n\n## TODO\n\n- [x] Read objects from loose store\n- [x] Read objects from pack store\n    - [x] Read packfile indexes\n    - [x] Read delta'd objects\n    - [x] Fix interface so we don't need to run `open` for each `read()`\n    - [x] **BUG**: certain OFS deltas are misapplied.\n        - [x] Isolate the error case\n        - [x] Fix it\n- [x] Load refs off of disk\n- [x] Parse git signatures (\"Identity\"'s)\n- [x] Create iterator for walking commit graph\n- [x] Create iterator for walking trees\n    - [ ] Materialize trees to disk (post gitindex?)\n- [x] Create index from packfile\n    - [x] Rename `Storage` trait to `Queryable`\n    - [x] Rework object loading API from `\u003cType + Boxed reader\u003e` to \"we take a writable object\"\n        - [x] Carry the rework out through `StorageSet`\n    - [x] Create the index\n    - [x] Wrap it in a nice API\n- [ ] refs v2\n    - [ ] Load refs on demand\n    - [ ] Load packed-refs\n- [ ] `.git/index` support\n    - [ ] Read git index cache\n    - [ ] Write git index cache\n- [ ] Create interface for writing new objects\n- [ ] Add benchmarks\n- [ ] Code coverage\n- [ ] Create packfile from list of objects (API TKTK)\n- [ ] Network protocol\n    - [ ] receive-pack\n    - [ ] send-pack\n- [ ] Try publishing to crates\n    - [ ] Write documentation\n    - [ ] Use crate in another project\n\n* * *\n\n## PLAN\n\n### 2022-04-30 Update\n\n- I did a bit of optimizing and my (completely unscientific) benchmarking has us pretty close to native `git log`!\n- I'm currently measuring the performance of `git log --pretty=oneline \u003e/dev/null` vs. `git_rs_log \u003e/dev/null` against a local checkout of [`nodejs/node`](https://github.com/nodejs/node).\n    - This is on a M1 Pro Max Macbook Pro.\n    - `git_rs_log` started out at ~500ms for a complete walk of the repo history. 
Vanilla git was seeing ~280-300ms.\n- I'm used to using DTrace + flamegraphs to profile, but to my dismay using DTrace requires booting into recovery mode on modern macOS \u0026 disabling system integrity protection.\n    - at least, that was what my first investigation turned up. It looks like there [may be other options](https://poweruser.blog/using-dtrace-with-sip-enabled-3826a352e64b) I missed.\n- My coworker [Eric](https://twitter.com/evntdrvn) suggested using macOS's Instruments.app instead via [`cargo-instruments`](https://crates.io/crates/cargo-instruments) which worked a treat.\n- Running cargo instruments necessitated exposing a way to set the current working directory for `git_rs_log`, so I added `clap`.\n- I'm pleased to report we were able to bring `git_rs_log` down to 300-320ms for a full walk of node's history. Here's what I did:\n    - Switching flate2 from its `miniz_oxide` backend (the default, written in rust) to native zlib was the biggest boost\n    - Switching a `sort_by_key` to `sort_unstable_by_key` in packfile index reads was a small (5-7ms) win. (Packfile indices have a fanout table, a list of ids in ascending order by id value, and a list of offsets-in-the-packfile whose order corresponds to the ids. In order to use a packfile index to read from a packfile, though, you need to be able to read the offset for your incoming id request _and_ the next id in the packfile in offset order. Hence the `sort_by_key` call – once we've loaded the ids and offsets, we have to keep a mapping of position of id -\u003e position in packfile.)\n    - Tuning up the commit parser gave me another 10ms or so. Out of expediency, I had originally treated commits as HTTP transaction-like – newline-separated key/value headers followed by a double newline then the message. Now I actually store the well-known fields directly on the struct in parsed form. 
(There's still room for improvement here, too!)\n        - This required adding an `Id::from_ascii_bytes(\u0026[u8])` for hexadecimal-encoded ids\n            - Before this you'd have to bounce through `std::str::from_utf8` which validates that the vector contains valid utf8 before we validate that it only contains hexadecimal chars; now we can do both in one step.\n- I'm pretty happy with that performance (for now), so I'm looking for something to pick up next. Options include:\n    - Support for the worktree index file, `.git/index`. This is the start of the path for writing objects to the Git database.\n    - Better support for refs.\n    - Support for SHA256 object format. (Vanilla Git supports this now, so it'd be interesting to dig into how it works.)\n    - Another Rust project, for a change of pace.\n        - Postgres change data capture support, a la Debezium (but not tied to kafka.)\n        - A return to WASM text parsing (I have a private project called \"watto\" for this.)\n\n### 2022-04-27 Update\n\n- I'm back! I finally have some free time (and, maybe more importantly, _available attention_) so I'm revisiting this\n  project after a few years.\n- Most recently, I fixed a bug with the \"identity\" parser. \"Identities\" are the bits of commit and tag metadata that look\n  roughly like `Humanname \u003cemail\u003e 10100100100 +7300`.\n    - It turns out my parsing had a bug: it was dropping the last character of the email.\n    - This was surfaced to me -- not by tests as it should have been, to my embarrassment -- but by trying to run `git_rs_log`\n      against the Node repository. 
It turns out someone had committed without an email: `Foobar \u003c\u003e 1010020203 -4000`.\n    - My off-by-one error turned this into a panic, with the program safely -- if unexpectedly -- crashing on that input.\n    - I fixed the parser bug and golfed the parser itself down [using `match` statements and constants](https://github.com/chrisdickinson/git-rs/commit/a8fefd7db9817724b9202fac41cb9f4183229920).\n- While I was in that part of the code, I did a little editorializing: renaming `identity` to `human_metadata`.\n- I also took the opportunity to _lazily_ parse the human metadata. There's no need to walk that entire bytestring unless\n  someone asks for it.\n    - It turns out that we _do_ ask for it during the course of `git log`: if we have multiple branches we need to load up\n      the commit metadata to compare timestamps, as the output order depends on commit timestamp.\n    - But for straightline chains of commits we don't need to load any of that up.\n        - This saves ~10-20 milliseconds on `git log` in the node repository.\n- Burying the lede: `git_rs_log` is about 100-200ms slower than `git log --pretty=oneline`, run against the node repository.\n    - Well, that certainly seems like a useful north star, does it not?\n    - Where are we spending time that git _isn't_?\n- Buoyed by my recent deep dive into the LLVM ecosystem, I briefly explored profile-guided optimization.\n    - I'm happy to report that I got a working setup and understood the results.\n    - I'm less happy to report that, well, the results weren't stunning. 
This kind of checks out: if the performance gap\n      is down to the number of I/O system calls we're making, assuming git makes fewer system calls that's where our perf\n      gap will be.\n    - So that's my current number one goal.\n- My number two goal is to modernize this repo and bring it up to the Rust\n  standards that I picked up from $dayjob (and in particular, via\n  [**@fishrock123**](https://github.com/fishrock123).)\n    - That means:\n        - implementing more standard traits on types,\n        - adding integration tests,\n        - adding type docs,\n        - and being a little bit more circumspect about what the crate exposes as a public API.\n\n### 2019-02-08 Update\n\n- **It's been a minute!**\n- As you might have seen, figuring out packfile indexing has forced a lot of changes on the repo.\n    - There's now a `src/pack/read.rs` file that holds generic read implementations for any `BufRead + Seek`.\n    - The signature of the `Storage` trait changed -- instead of returning a boxed read object, it now accepts\n      a `Write` destination.\n    - Further, `Storage` is now `Queryable` (a better name!).\n        - Because we moved from returning a Box to accepting generic `Write`, we could no longer box `Queryable`s.\n            - I didn't know this about Rust, so TIL!\n        - `StorageSet` objects had to be rethought as a result -- they could no longer contain Box'd `Storage` objects.\n            - Instead, we put the compiler to work -- because storage sets are known at compile time, I implemented\n              `Queryable` for the unit type `()`, for pairs of types `(S, T)`, and for vectors of a single type, `Vec\u003cT\u003e`.\n            - This means that a `StorageSet` may hold a single, top-level `Queryable`, which might contain nested\n              heterogeneous `Queryable` definitions.\n                - It gives me warm, fuzzy feelings :revolving_hearts:\n- You might also note that we're _not actually done_ indexing packfiles. 
:scream:\n    - Here's the sitch: in order to create a packfile index, you have to run a CRC32 over the\n      **compressed** bytes in the packfile.\n    - The `ZlibDecoder` will pull more bytes from the underlying stream than it needs, so you can't take\n      the route of handing it a `CrcReader` and get good results.\n    - It's got to be a multi-pass deal.\n        - The current plan is: run one pass to get offsets, un-delta'd shas and types.\n        - Run a second pass to resolve CRCs and decompress deltas. This can be done in parallel.\n\n### 2019-01-23 Update\n\n- It's time to start indexing packfiles.\n    - This'll let us start talking to external servers and cloning things!\n- However, it's kind of a pain.\n    - Packfiles (viewed as a store) aren't hugely useful until you have an index, so\n      I had designed them as an object that takes an optional index from outside.\n        - My thinking was that if an index was not given, we would build one in-memory.\n    - That just blew up in my face, a little bit. 
:boom:\n    - In order to build an index from a packfile you have to iterate over all of the objects.\n        - For each object, you want to record the offset and the SHA1 id of the object at that offset.\n        - **However**, the object might be an offset or a reference delta.\n            - That means that in order to index a packfile, you've got to be able\n              to read delta'd objects at offsets within the packfile (implying you already\n              have the `Packfile` instance created) and outside of the packfile (\n              implying you have a `StorageSet`.)\n            - In other words: my assumptions about the program design are wrong.\n    - So, in the next day or so I'll be reversing course.\n        - It should be possible to produce a `Packfile` as a non-store object _and_\n          iterate over it.\n        - The \"store\" form of a packfile should be the combination of a `Packfile`\n          and an `Index` (a `PackfileStore`.)\n            - This means I'll be splitting the logic of `src/stores/mmap_pack` into\n              \"sequential packfile reads\" and \"random access packfile reads (with an index.)\"\n- It's fun to be wrong :tada:\n\n* * *\n\n### 2019-01-21 Update\n\n- Well, that was a fun bug. Let's walk through it, shall we?\n    - This _occasionally_ showed up when a delta would decode another delta'd object.\n        - I found a hash that would reliably fail to load.\n        - We'd fail the read because the incoming base object would not match the 2nd\n          delta's \"base size\". [Here][ref_10].\n        - Removing the check to see if I got the deltas wrong would cause the thread to\n          panic -- the delta's base size _wasn't a lie_.\n    - First, I switched back to my old mmap-less packfile implementation, because I recently\n      touched that code. 
\"Revert the thing you touched last\" is a winning strategy in these\n      cases: doesn't cost expensive thinking, quickly puts bounds around the bug.\n        - Alas, the old packfile implementation _also_ had this bug. **No dice.**\n    - I compared the output of this git implementation to my [JS implementation][ref_11].\n        - I confirmed that the output of the JS implementation worked by comparing its output\n          for the hash of concern to vanilla git.\n        - After confirming that, I logged out the offsets being read from the file and the expected\n          sizes. I compared this to similar debugging output I added to `git-rs`.\n            - The offsets are the bound values sent into [the packfile][ref_12]. For the outermost\n              read (\"Give me the object represented by `eff4c6de1bd15cb0d28581e4da17035dbf9e8596`\"),\n              the offsets come from the packfile index.\n            - For `OFS_DELTA` (\"offset delta\") types, the start offset is obtained by reading a [varint][ref_13]\n              from the packfile and subtracting it from the current start offset.\n        - The offsets and expected sizes matched!\n            - This meant that:\n                1. I was reading the correct data from the packfile\n                2. I was reading varints correctly\n                3. The bug must be in the application of the delta to a base object\n        - From there I added logging above [these state transitions][ref_14], noting the particulars of the\n          operation.\n            - I added the same logging to the JS implementation, and found that (aside from\n              totally bailing out ahead of the 2nd delta application) the commands were the same.\n            - So it wasn't even that my delta code was wrong: it was my `Read` state machine.\n    - At this point, I was like: \"This is a read state machine bug. 
[I know this][ref_15].\"\n        - So, one of the things this state machine does is carefully bail out if it\n          can't [write all of the bytes for a command][ref_16]. (\"If there remains an `extent` to write,\n          record the next state and break the loop.\")\n        - However, at this point we've already consumed the last command. There are no more instructions.\n        - So if this function were to be called again, ...\n            - We would politely (but firmly) [tell the caller][ref_17] to [buzz off][ref_18] (`written == 0` here).\n    - [The fix][ref_19] turned out to be simple, as these fixes usually are.\n        - (I need to write a test for this, I know. I know. Pile your shame on me.)\n- So [what did we learn][ref_20]?\n    - Always test your state machines, folks.\n    - (I've said it once, and I'm saying it again.) Malleable reference implementations will save your bacon.\n        - Make sure you can trust your reference implementation.\n- Anyway. The tree reader works now! :evergreen_tree::deciduous_tree::evergreen_tree:\n\n* * *\n\n### 2019-01-19 Update\n\n- It's slightly faster! :tada:\n    - mmap sped things along nicely, shaving 20ms off of our runtime.\n    - We're still reliably _slower_ than git, though. 
It might be because we load the refset\n      immediately.\n    - I kept the immutable \"file per read\" packfile store around; I think it may come in handy\n      in the future.\n    - It would be excellent to capture this as a benchmark instead of running it ad-hoc.\n- I integrated the tree walking iterator and got a nice surprise:\n    - There's a bug in my OFS delta code!\n    - This is interesting, because it only appears for certain blobs in certain repositories.\n        - Otherwise other OFS deltas seem to resolve cleanly.\n        - Case in point: many of the commits I load as a test of the commit walk-er are OFS-delta'd.\n    - Also of note: I've split from `src/bin.rs` into dedicated binaries for tree walking and commit walking.\n- Today's theme: isolate the bug in a test case.\n    - **EOD Update**: It's really helpful to have a reference implementation.\n    - I've confirmed that the reference implementation _can read_ the object that breaks this project.\n        - We are reading the same offsets, as well (phew)\n    - I've further confirmed that swapping out the packfile implementation for the older, slower packfile\n      doesn't affect anything.\n    - *I suspect* this means there's either a problem in my delta code (highly possible!), my varint decoding\n      code (*very* possible), or the Read implementation for Deltas. Yay, narrowed down results!\n\n* * *\n\n### 2019-01-15 Update\n\n- I added an (experimental) `git_rs::walk::tree` iterator to take a Tree and yield\n  a path + a blob for each item.\n    - It's probably slower than it should be: for each item it has to clone a `PathBuf`, because I couldn't work out the lifetimes.\n    - **If you know how to fix that**, please [open an issue][ref_8] and let me know :revolving_hearts:\n- I took some time to clean up the warnings during builds.\n    - Oh! 
I also installed [Clippy][ref_9] which warns about higher level antipatterns in Rust!\n- I'm still noodling over the **2-3x** slowdown between vanilla git and Our Git.\n    - I think I might create two packfile interfaces -- one \"generic\" and one \"mmap\"'d, to see if\n      one or the other makes up the difference in performance.\n        - This also has the virtue of being `unsafe` code, which is something I have not yet used\n          in Rust!\n\n* * *\n\n### 2019-01-06 Update\n\n- I wrote an iterator for commits! The [first cut][ref_6] kept a `Vec` of `(Id, Commit)` around,\n  so we could always pick the most recent \"next\" commit out of the graph (since commits may have\n  many parents.)\n    - But in finishing up the collections section of \"Programming Rust\" I noticed that `BinaryHeap`\n      was available, which always yields its greatest entry first. You don't often get to choose the underlying\n      storage mechanism of your collections in JS, so this hadn't occurred to me!\n    - Anyway. I swapped out the `Vec` for a `BinaryHeap` [in this commit][ref_7]. Because this pushes\n      the ordering into an `Ord` impl for a type, this opens up the possibility of using the one iterator\n      definition for multiple different orderings. Neat!\n- Testing against a couple of long-lived repos, the results coming out of `git_rs` are exactly the same as\n  `git`!\n    - However, it takes about **twice** the time: **60ms** for `git_rs` where `git` takes **30ms**.\n    - I think I have a lead on this, and it has to do with packfile stores: each read from a packfile\n      opens a new `File` instance.\n- I've added a **TODO** section to keep track of what comes next!\n\n* * *\n\n### 2019-01-02 Update\n\n- I implemented [ref loading][ref_2]. It was a bit of a pain! 
Translating to and\n  from `Path` types took a bit of doing.\n- I've been trying to read up on Rust idioms -- I found a couple of resources:\n    - [The Rust API Guidelines][ref_3] doc has been _very_ helpful.\n    - **@mre**'s [idiomatic rust repo][ref_4] collects many interesting links.\n    - I've also been idly checking out [videos from RustConf 2018][ref_5]\n- As a result, I've implemented `FromStr` for `Id`, (hopefully) giving it a\n  more idiomatic API -- `let id: Id = str.parse()?`\n\n* * *\n\n### 2018-12-27 Update\n\n- Rust is feeling more natural. [This chain][ref_0] felt natural to write. I\n  was even able to [cross-index a list][ref_1] with only a minimum of fighting\n  the borrow checker.\n- I split the objects interface into Type + boxed read with a method for reifying\n  the data into an Object. This feels good! It lets folks check to see, for example,\n  if they're referring to a Blob without having to load the entire Blob into memory.\n- The closure interface for the loose interface works pretty well, but pack interfaces\n  need to be able to ask the overarching set of stores for a reference due to REF_DELTA\n  objects. This is a bummer, because it really quickly turns into \"fighting the borrow\n  checker.\" Right now I think the way forward is to build a StorageSet that holds a Vec\n  of heterogeneous `Box\u003cStorage\u003e` objects, where `Storage` is a new trait that specifies\n  `get(\u0026Id, \u0026StorageSet)`.\n    - A sidenote re: the loose store: it feels kind of odd to have to produce a\n      `git_rs::Error` instead of a `std::io::Error`. Room for improvement!\n- Oh! It was pretty easy to add a binary to this lib crate. 
And now we can `git log`\n  other repos!\n\n* * *\n\n### 2018-12-21 Update\n\n- Decided to focus on moving a bit slower and making sure I have tests for\n  primitives this time around.\n- Moved away from my original `Box\u003cWrite\u003e` trait object design for object\n  instance reading \u0026 storage format in favor of generics.\n\n* * *\n\n[ref_0]: https://github.com/chrisdickinson/git-rs/blob/fdbe4ac7c781a5c085777baafbd15655be2eca0b/src/objects/commit.rs#L20-L30\n[ref_1]: https://github.com/chrisdickinson/git-rs/blob/fdbe4ac7c781a5c085777baafbd15655be2eca0b/src/packindex.rs#L116-L126\n[ref_2]: https://github.com/chrisdickinson/git-rs/commit/6157317fb18acac0633c624e9831282a950b4db0\n[ref_3]: https://rust-lang-nursery.github.io/api-guidelines/\n[ref_4]: https://github.com/mre/idiomatic-rust\n[ref_5]: https://www.youtube.com/playlist?list=PL85XCvVPmGQi3tivxDDF1hrT9qr5hdMBZ\n[ref_6]: https://github.com/chrisdickinson/git-rs/blob/254d97e3d840eded4e5ff5a06b9414ff9396e976/src/walk/commits.rs#L56-L71\n[ref_7]: https://github.com/chrisdickinson/git-rs/commit/f8f4cf5f1430b14d3ef0b298ffa9f2cd880d5c28/src/walk/commits.rs#L40\n[ref_8]: https://github.com/chrisdickinson/git-rs/issues/new?title=Here%27s%20how%20to%20remove%20the%20clone()%20from%20walk::tree\n[ref_9]: https://github.com/rust-lang/rust-clippy\n[ref_10]: https://github.com/chrisdickinson/git-rs/blob/12afab5462f67c8670671177b4053aa566b45338/src/delta.rs#L45-L47\n[ref_11]: https://github.com/chrisdickinson/git-odb-pack\n[ref_12]: https://github.com/chrisdickinson/git-rs/blob/eff4c6de1bd15cb0d28581e4da17035dbf9e8596/src/stores/mmap_pack.rs#L41\n[ref_13]: https://developers.google.com/protocol-buffers/docs/encoding#varints\n[ref_14]: https://github.com/chrisdickinson/git-rs/blob/eff4c6de1bd15cb0d28581e4da17035dbf9e8596/src/delta.rs#L133-L140\n[ref_15]: https://66.media.tumblr.com/cd7765c4bfbe7d124ad2b7bf344b9588/tumblr_p902faD2n61wzvt9qo1_500.gif\n[ref_16]: 
https://github.com/chrisdickinson/git-rs/blob/eff4c6de1bd15cb0d28581e4da17035dbf9e8596/src/delta.rs#L154-L161\n[ref_17]: https://github.com/chrisdickinson/git-rs/blob/12afab5462f67c8670671177b4053aa566b45338/src/delta.rs#L93\n[ref_18]: https://github.com/chrisdickinson/git-rs/blob/12afab5462f67c8670671177b4053aa566b45338/src/delta.rs#L187\n[ref_19]: https://github.com/chrisdickinson/git-rs/commit/1442539cd01b7140ff2f58bc5df39e4686bd6843\n[ref_20]: https://thumbs.gfycat.com/PiercingSnarlingArabianhorse-size_restricted.gif\n","funding_links":[],"categories":["Rust"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fchrisdickinson%2Fgit-rs","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fchrisdickinson%2Fgit-rs","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fchrisdickinson%2Fgit-rs/lists"}