{"id":29649837,"url":"https://github.com/contextlab/quantum-conversations","last_synced_at":"2025-07-22T04:35:38.105Z","repository":{"id":304999110,"uuid":"1021573337","full_name":"ContextLab/quantum-conversations","owner":"ContextLab","description":"Model sequences of \"all the thoughts you didn't have and all the things you didn't say\"","archived":false,"fork":false,"pushed_at":"2025-07-17T16:01:36.000Z","size":383,"stargazers_count":0,"open_issues_count":0,"forks_count":0,"subscribers_count":0,"default_branch":"main","last_synced_at":"2025-07-17T18:47:27.358Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Jupyter Notebook","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/ContextLab.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2025-07-17T15:49:39.000Z","updated_at":"2025-07-17T16:01:40.000Z","dependencies_parsed_at":"2025-07-17T21:51:11.893Z","dependency_job_id":"205655a1-779a-43f7-85f7-f5e6a86cca33","html_url":"https://github.com/ContextLab/quantum-conversations","commit_stats":null,"previous_names":["contextlab/quantum-conversations"],"tags_count":null,"template":false,"template_full_name":"ContextLab/latex-base","purl":"pkg:github/ContextLab/quantum-conversations","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ContextLab%2Fquantum-conversations","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ContextLab%2Fquantum-conversations/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ContextLab%2Fquantum-
conversations/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ContextLab%2Fquantum-conversations/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/ContextLab","download_url":"https://codeload.github.com/ContextLab/quantum-conversations/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ContextLab%2Fquantum-conversations/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":266428574,"owners_count":23927023,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-07-22T02:00:09.085Z","response_time":66,"last_error":null,"robots_txt_status":null,"robots_txt_updated_at":null,"robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2025-07-22T04:35:35.912Z","updated_at":"2025-07-22T04:35:38.095Z","avatar_url":"https://github.com/ContextLab.png","language":"Jupyter Notebook","readme":"# Quantum Conversations\n\n## Overview\n\nDo the things we *don't* say (but perhaps that we *thought*) affect what we say (or think!) in the future?  Modern (standard) LLMs output sequences of tokens, one token at a time. However, to emit a single token at timestep $t$, a model draws from a probability distribution over the $V$ possible tokens in its vocabulary. 
The \"chosen\" token, $x_t$, will tend to be one of the more probable tokens, but (particularly when the model temperature is high) it might not be the *most* probable token, and occasionally the chosen token might even be a lower-probability token.  Given that we are currently at timestep $t$, our core question is: do humans \"keep around\" some representation of the history of \"what *could* have been outputted\" rather than solely storing the sequence of previously outputted tokens?\n\n## Approach\n\nGiven a model, $M$, and a sequence of tokens, $x_1, x_2, ..., x_t$, we want to examine the probability of outputting each possible token (there are $V$ of them) at time $t+1$.  We can then store the full \"history\" of outputted token probabilities as a $V \\times t$ matrix.  In principle, we could consider the full set of branching paths that could have been taken.  However, for a sequence of $t$ tokens, this would require storing $V^t$ possible paths.  This is intractable, even for relatively short sequences ($V$ is on the order of 100,000, and $t$ is on the order of thousands).  Here we approximate the set of possible paths using particle filters.  Then, for $n$ particles, we need to store a $V \\times t \\times n$ tensor.\n\nWe can then ask: given an observed sequence of tokens from a human conversation or narrative, can we better explain the token-by-token probabilities using that full tensor (e.g., by accounting for tokens *not* emitted), or is \"all\" of the predictive power carried solely by the single observed sequence?\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcontextlab%2Fquantum-conversations","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fcontextlab%2Fquantum-conversations","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcontextlab%2Fquantum-conversations/lists"}