Model: GPT-2 Large (36 layers, 5120 MLP neurons per layer)
Dataset: Open Web Text
Neuron 4673 in Layer 25
Transformer Lens Loading: HookedTransformer.from_pretrained('gpt2-large')
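The records below were produced with this setup. A minimal sketch of how a single activation of this neuron could be reproduced with TransformerLens follows; the prompt string is a placeholder, not one of the Open Web Text entries listed below.

```python
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained('gpt2-large')

LAYER, NEURON = 25, 4673
prompt = "A placeholder passage standing in for an Open Web Text document."

tokens = model.to_tokens(prompt)
_, cache = model.run_with_cache(tokens)

# blocks.25.mlp.hook_post holds the post-nonlinearity MLP activations,
# shape [batch, position, d_mlp]; take neuron 4673 at every position.
acts = cache[f'blocks.{LAYER}.mlp.hook_post'][0, :, NEURON]
token_index = int(acts.argmax())
print(f"Max Act: {acts[token_index].item():.4f} at token index {token_index} "
      f"({model.to_str_tokens(tokens)[token_index]!r})")
```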
Text #0
Max Range: 5.5663. Min Range: -5.5663
Max Act: 5.5663. Min Act: -0.1700
Data Index: 3678553 (Open Web Text)
Max Activating Token Index: 759
Text #1
Max Range: 5.5663. Min Range: -5.5663
Max Act: 5.5038. Min Act: -0.1700
Data Index: 6737443 (Open Web Text)
Max Activating Token Index: 989
Text #2
Max Range: 5.5663. Min Range: -5.5663
Max Act: 5.4101. Min Act: -0.1700
Data Index: 7125589 (Open Web Text)
Max Activating Token Index: 263
Text #3
Max Range: 5.5663. Min Range: -5.5663
Max Act: 5.4101. Min Act: -0.1700
Data Index: 4396126 (Open Web Text)
Max Activating Token Index: 163
Text #4
Max Range: 5.5663. Min Range: -5.5663
Max Act: 5.3788. Min Act: -0.1700
Data Index: 2123380 (Open Web Text)
Max Activating Token Index: 553
Text #5
Max Range: 5.5663. Min Range: -5.5663
Max Act: 5.3788. Min Act: -0.1700
Data Index: 3549482 (Open Web Text)
Max Activating Token Index: 387
Text #6
Max Range: 5.5663. Min Range: -5.5663
Max Act: 5.3788. Min Act: -0.1700
Data Index: 8435804 (Open Web Text)
Max Activating Token Index: 665
Text #7
Max Range: 5.5663. Min Range: -5.5663
Max Act: 5.3476. Min Act: -0.1700
Data Index: 3523767 (Open Web Text)
Max Activating Token Index: 256
Text #8
Max Range: 5.5663. Min Range: -5.5663
Max Act: 5.3788. Min Act: -0.1700
Data Index: 4666575 (Open Web Text)
Max Activating Token Index: 359
Text #9
Max Range: 5.5663. Min Range: -5.5663
Max Act: 5.3163. Min Act: -0.1700
Data Index: 1446688 (Open Web Text)
Max Activating Token Index: 169
Text #10
Max Range: 5.5663. Min Range: -5.5663
Max Act: 5.3163. Min Act: -0.1700
Data Index: 5026414 (Open Web Text)
Max Activating Token Index: 509
Text #11
Max Range: 5.5663. Min Range: -5.5663
Max Act: 5.3163. Min Act: -0.1700
Data Index: 7989559 (Open Web Text)
Max Activating Token Index: 197
Text #12
Max Range: 5.5663. Min Range: -5.5663
Max Act: 5.3163. Min Act: -0.1700
Data Index: 7577394 (Open Web Text)
Max Activating Token Index: 300
Text #13
Max Range: 5.5663. Min Range: -5.5663
Max Act: 5.3163. Min Act: -0.1700
Data Index: 7392071 (Open Web Text)
Max Activating Token Index: 389
Text #14
Max Range: 5.5663. Min Range: -5.5663
Max Act: 5.2851. Min Act: -0.1700
Data Index: 5165779 (Open Web Text)
Max Activating Token Index: 250
Text #15
Max Range: 5.5663. Min Range: -5.5663
Max Act: 5.2851. Min Act: -0.1700
Data Index: 2887298 (Open Web Text)
Max Activating Token Index: 941
Text #16
Max Range: 5.5663. Min Range: -5.5663
Max Act: 5.2538. Min Act: -0.1700
Data Index: 8047762 (Open Web Text)
Max Activating Token Index: 717
Text #17
Max Range: 5.5663. Min Range: -5.5663
Max Act: 5.2226. Min Act: -0.1700
Data Index: 1055609 (Open Web Text)
Max Activating Token Index: 262
Text #18
Max Range: 5.5663. Min Range: -5.5663
Max Act: 5.2226. Min Act: -0.1700
Data Index: 6701233 (Open Web Text)
Max Activating Token Index: 989
Text #19
Max Range: 5.5663. Min Range: -5.5663
Max Act: 5.1913. Min Act: -0.1700
Data Index: 6670383 (Open Web Text)
Max Activating Token Index: 163
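The listing above is a set of max-activating dataset examples for this neuron: each record gives the maximum activation the neuron reaches in one Open Web Text sequence, the index of that sequence in the dataset, and the token position where the maximum occurs (Min Act sits near -0.1700, consistent with the minimum of the GELU nonlinearity). Below is a hedged sketch of how such a table could be assembled; the `token_batches` iterable, its (data index, tokens) pairing, and the batch size of one are assumptions standing in for the dashboard's actual pipeline.

```python
import heapq

from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained('gpt2-large')
LAYER, NEURON, TOP_K = 25, 4673, 20
HOOK_NAME = f'blocks.{LAYER}.mlp.hook_post'

def top_activating_examples(token_batches):
    """token_batches: iterable of (data_index, tokens) pairs, tokens of shape [1, seq_len]."""
    top = []  # min-heap of (max_act, data_index, token_index)
    for data_index, tokens in token_batches:
        # Cache only the one hook point we need, to keep memory flat.
        _, cache = model.run_with_cache(tokens, names_filter=HOOK_NAME)
        acts = cache[HOOK_NAME][0, :, NEURON]
        heapq.heappush(top, (acts.max().item(), data_index, int(acts.argmax())))
        if len(top) > TOP_K:
            heapq.heappop(top)  # discard the weakest example kept so far
    # Strongest examples first.
    return sorted(top, reverse=True)
```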