Like Golden Gate Claude, but with a CLIP Vision Transformer ~ feature activation manipulation fun!
- Host: GitHub
- URL: https://github.com/zer0int/golden-gate-clip
- Owner: zer0int
- Created: 2024-06-09T15:45:03.000Z (5 months ago)
- Default Branch: CLIP-vision
- Last Pushed: 2024-06-21T07:26:37.000Z (5 months ago)
- Last Synced: 2024-06-22T00:42:57.172Z (5 months ago)
- Language: Python
- Size: 1.62 MB
- Stars: 0
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
## ⭐ 🌉 Golden Gate CLIP ✨🤖 🥳 🌉
----
### Changes 21/June/24
- Added i-clip-golden-gate-getter-TEXT-all-c_fc-from-wordlist.py
- Obtains the top (by activation value) text transformer features for a wordlist (a rough sketch of the idea follows below)
- The example alltexts.txt contains "strange" CLIP-self-predicted words describing Gaussian noise in images
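For a rough idea of what such a getter does, here is a hedged sketch (not the repo's script; the layer index and file handling are placeholders) of capturing text-transformer c_fc activations for a wordlist with a PyTorch forward hook:

```python
# Hedged sketch (not the repo's script): capture text-transformer c_fc
# activations for a wordlist using OpenAI's CLIP package.
import clip
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-L/14", device=device)

LAYER = 21          # placeholder: the text resblock to inspect
captured = []

def save_act(module, inp, out):
    # out has shape [tokens, batch, 4*width] in CLIP's LND layout
    captured.append(out.detach().float().cpu())

handle = model.transformer.resblocks[LAYER].mlp.c_fc.register_forward_hook(save_act)

# assuming alltexts.txt holds whitespace-separated words, as in this repo's example
words = open("alltexts.txt").read().split()
with torch.no_grad():
    model.encode_text(clip.tokenize(words).to(device))
handle.remove()

acts = captured[0].amax(dim=0)             # max over token positions -> [words, 4*width]
top_vals, top_idx = acts.topk(10, dim=-1)  # top-10 feature indices per word
for word, idx in zip(words, top_idx):
    print(word, idx.tolist())
```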
----
Inspired by [Anthropic's Golden Gate Claude](https://www.anthropic.com/news/golden-gate-claude), this repo obtains CLIP Vision Transformer feature activations for a set of images, compares which feature indices are present in all of them, and then manipulates the activation value of that neuron in the CLIP Vision Transformer's MLP c_fc (fully connected layer). Result: CLIP predicts "San Francisco, Bay Area, sf, sfo" even for an image that it otherwise (rightfully!) describes as noise.
![CLIP-golden-gate](https://github.com/zer0int/Golden-Gate-CLIP/assets/132047210/bd18afa8-e220-4d9f-aa50-2d36a045d8b0)
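If you just want the core trick in isolation, here is a minimal, hedged sketch (placeholder layer, feature index, and boost value, not the repo's exact code) of boosting one c_fc feature in a CLIP Vision Transformer layer with a PyTorch forward hook:

```python
# Hedged sketch (placeholders, not the repo's exact code): boost one c_fc
# feature in a CLIP Vision Transformer layer via a PyTorch forward hook.
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-L/14", device=device)

LAYER = 21        # near-output layer, as recommended below
FEATURE = 1234    # placeholder: a "Golden Gate" feature index found with the "i-" scripts
BOOST = 1000.0    # placeholder value to add to that feature's activation

def boost_feature(module, inp, out):
    # out has shape [tokens, batch, 4*width] in CLIP's LND layout
    out = out.clone()
    out[:, :, FEATURE] += BOOST
    return out  # returning a tensor from a forward hook replaces the layer output

handle = model.visual.transformer.resblocks[LAYER].mlp.c_fc.register_forward_hook(boost_feature)

image = preprocess(Image.open("noise/ggnoise.png")).unsqueeze(0).to(device)
with torch.no_grad():
    image_features = model.encode_image(image)  # everything downstream now "sees" the boosted feature
handle.remove()
```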
- To reproduce: `python run_clipga-goldengate-manipulate-neuron-activation.py --image_path noise/ggnoise.png`
- ...and run the files starting with "i-" as-is to get the activation values used in the code above.

To make your own:
- Edit the folder name in the two files starting with "i-" to point at, e.g., photos of your cat.
- The resulting text files show you which feature numbers (indices) are present in all images.
- This helps distinguish your actual cat from vases, sofas, and other things that are not always around your cat.
- If no indices are shared, the "cat" might not be salient in some images. Check the "-all" files manually**!
- ** I intend to make this easier in the future by displaying outliers so you can remove them.
- For now, the .json / .csv files can be opened with a text editor and compared manually with CTRL+F (or scripted, as in the sketch below).
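A rough way to script that comparison (hedged sketch; it assumes one JSON file per image containing a list of feature indices, which may differ from the actual output layout of the "i-" scripts):

```python
# Hedged sketch: intersect top-activating feature indices across images.
# Assumes one JSON list of feature indices per image; the real "i-" scripts'
# output layout may differ.
import json
from pathlib import Path

per_image = []
for path in sorted(Path("activations/").glob("*.json")):  # placeholder folder
    with open(path) as f:
        per_image.append(set(json.load(f)))

common = set.intersection(*per_image) if per_image else set()
print("Feature indices present in ALL images:", sorted(common))
# If 'common' is empty, the concept may not be salient in some images --
# inspect the per-image files and drop the outliers.
```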
----
- For best results, manipulate features in Layers 21 and 22 (the third-from-last and penultimate layers, near the output).
- Early layers (<= ~5, near the input) encode simple lines and zigzags; layers near the output encode complex, multimodal features.
- Check "run_clipga-"; I explained how to manipulate activations in the #code #comments!

🫤 Limitation: This needs at least somewhat similar images (e.g. Golden Gate Bridge <-> any random bridge) to work when manipulating the activation value on a single layer. You'd likely have to coherently trick CLIP into the right activations over multiple layers to make it as totally obsessed with the Golden Gate Bridge as GG Claude was. However, this toolkit is a great start!
⚠️ Files starting with "x-" let you obtain absolutely every activation value in CLIP, including Multi-head Attention, the projection layer, and individual patches (image tokens) + CLS. If you have a use for them, I assume you won't need an introduction on how to use them. Enjoy!
----
Original CLIP Gradient Ascent Script: used with permission from [@advadnoun](https://twitter.com/advadnoun) on Twitter / X.
- GG Bridge images: via Google & Google Image Search, downsized to a minimum resolution of 336x336 pixels, limited to 5 (fair use).
----
~ CLIP's ADVERB neuron, activation value + 1000, gazing at the GG Bridge ~

![adverb-neuron-good](https://github.com/zer0int/Golden-Gate-CLIP/assets/132047210/61fc6fa9-08de-4bcd-bf15-2b4cbba07817)