Model: GPT-2 Small (12 layers, 3,072 MLP neurons per layer)
Dataset: OpenWebText
Neuron: 310 in Layer 1
Loading with TransformerLens: HookedTransformer.from_pretrained('gpt2-small')
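A minimal sketch of inspecting this neuron, assuming the standard TransformerLens API; the prompt string is hypothetical and not drawn from the source data:

```python
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2-small")  # 12 layers, d_mlp = 3072

LAYER, NEURON = 1, 310
prompt = "An example prompt."  # hypothetical, for illustration only

tokens = model.to_tokens(prompt)        # [1, seq_len], BOS prepended
_, cache = model.run_with_cache(tokens)

# "blocks.1.mlp.hook_post" stores the MLP activations after the GELU,
# shape [batch, seq_len, d_mlp]; index the neuron dimension at 310.
acts = cache[f"blocks.{LAYER}.mlp.hook_post"][0, :, NEURON]

for tok, act in zip(model.to_str_tokens(prompt), acts.tolist()):
    print(f"{tok!r}: {act:.4f}")
```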
All entries share Max Range: 5.4369 and Min Range: -5.4369, so these are stated once here rather than repeated per text. Each row gives the neuron's maximum and minimum activation on one OpenWebText example, that example's dataset index, and the token position where the maximum occurs. Full texts for each example are truncated in the source export.

Text #   Max Act   Min Act   Data Index   Max Activating Token Index
0        5.4369    -0.1700   147250       1002
1        5.4093    -0.1700   8438538      423
2        5.4057    -0.1700   1476256      38
3        5.3763    -0.1700   7984580      922
4        5.3463    -0.1700   4948783      193
5        5.3388    -0.1700   5094447      229
6        5.3493    -0.1700   5289059      409
7        5.3630    -0.1700   6577599      106
8        5.3409    -0.1700   1051224      143
9        5.3273    -0.1700   1844844      393
10       5.3390    -0.1700   2457414      1010
11       5.3295    -0.1700   4491992      284
12       5.3121    -0.1700   5688162      326
13       5.3109    -0.1700   5904119      476
14       5.3083    -0.1700   2956981      1017
15       5.3214    -0.1700   3177817      979
16       5.3361    -0.1700   1716906      956
17       5.3188    -0.1700   7757016      1015
18       5.3129    -0.1700   3347719      125
19       5.3214    -0.1700   1621259      1021
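The uniform Min Act of -0.1700 is what the GELU nonlinearity would predict: its global minimum is roughly -0.17, so the per-text floor reflects the activation function rather than anything in the text. Below is a hedged sketch of how records like those above could be reproduced. The dataset snapshot ("stas/openwebtext-10k"), the 100-example scan, and the 1024-token truncation are assumptions not stated by the source, and the resulting data indices will not match the source's.

```python
import torch
from datasets import load_dataset
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2-small")
LAYER, NEURON = 1, 310
hook_name = f"blocks.{LAYER}.mlp.hook_post"

# Assumption: a small public OpenWebText sample, not the source's exact corpus.
dataset = load_dataset("stas/openwebtext-10k", split="train")

records = []
for data_index, example in enumerate(dataset.select(range(100))):
    tokens = model.to_tokens(example["text"])[:, :1024]  # GPT-2 context limit
    with torch.no_grad():
        # names_filter restricts caching to the single hook we need.
        _, cache = model.run_with_cache(tokens, names_filter=hook_name)
    acts = cache[hook_name][0, :, NEURON]
    max_act, max_idx = acts.max(dim=0)
    records.append((data_index, max_act.item(), int(max_idx)))

# Report the strongest examples, mirroring the fields in the table above.
records.sort(key=lambda r: -r[1])
for data_index, max_act, token_index in records[:20]:
    print(f"Data Index: {data_index}  Max Act: {max_act:.4f}  "
          f"Max Activating Token Index: {token_index}")
```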