Model: GPT-2 Small (12 layers, 3072 neurons per MLP layer)
Dataset: OpenWebText
Neuron 2651 in Layer 7
TransformerLens Loading: HookedTransformer.from_pretrained('gpt2-small')
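As orientation for the records below, here is a minimal sketch of reproducing one: load the model with TransformerLens and read this neuron's per-token MLP activation. The hook name `blocks.7.mlp.hook_post` and the assumption that "Max Act" is the post-nonlinearity MLP activation follow TransformerLens conventions and are not stated in this dump; the prompt text is a placeholder.

```python
# Minimal sketch: read neuron 2651 in layer 7 of GPT-2 Small with
# TransformerLens. Assumes "Max Act" below is the post-nonlinearity MLP
# activation exposed at the hook "blocks.7.mlp.hook_post".
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2-small")  # 12 layers, d_mlp = 3072

LAYER, NEURON = 7, 2651
tokens = model.to_tokens("Any prompt text goes here.")  # placeholder prompt

with torch.no_grad():
    _, cache = model.run_with_cache(tokens)

# One activation per token position for this neuron.
acts = cache[f"blocks.{LAYER}.mlp.hook_post"][0, :, NEURON]
print(f"Max Act: {acts.max().item():.4f} at token index {acts.argmax().item()}")
```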
Top activating examples on OpenWebText (all records share Max Range: 4.3277, Min Range: -4.3277, Min Act: -0.1700):

Text #  Max Act  Data Index  Max Activating Token Index
0       4.3277   4219919     421
1       4.1738   5076628     1020
2       4.1335   3920344     732
3       4.0791   1954780     724
4       3.9592   5743028     773
5       3.9141   7626863     935
6       3.9390   7657988     697
7       3.9212   6114222     859
8       3.9488   8565175     987
9       3.9192   3624595     954
10      3.9421   3269015     1004
11      3.9068   3174118     966
12      3.8840   3527154     648
13      3.8798   4024784     991
14      3.8673   5730634     971
15      3.8654   8109888     976
16      3.8810   2264310     923
17      3.8450   7495217     697
18      3.8314   1735739     857
19      3.8481   2691229     928

(Full example texts are truncated in this export.)
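For context, a list like the one above is typically produced by scanning the dataset for the documents where the neuron fires hardest. The sketch below shows one way to do that with a top-k heap; the dataset identifier, streaming flag, and bookkeeping are illustrative assumptions, not a description of how this particular dump was generated.

```python
# Illustrative sketch: scan OpenWebText for the top-k documents by this
# neuron's peak activation, recording (max act, data index, token index).
# The dataset id "openwebtext" and streaming setup are assumptions.
import heapq
import torch
from datasets import load_dataset
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2-small")
LAYER, NEURON, TOP_K = 7, 2651, 20
HOOK = f"blocks.{LAYER}.mlp.hook_post"

top = []  # min-heap of (max_act, data_index, token_index)
ds = load_dataset("openwebtext", split="train", streaming=True)
with torch.no_grad():
    for data_index, example in enumerate(ds):
        tokens = model.to_tokens(example["text"])[:, :1024]  # GPT-2 context limit
        _, cache = model.run_with_cache(tokens, names_filter=HOOK)
        acts = cache[HOOK][0, :, NEURON]
        heapq.heappush(top, (acts.max().item(), data_index, acts.argmax().item()))
        if len(top) > TOP_K:
            heapq.heappop(top)  # drop the weakest example kept so far

for max_act, data_index, token_index in sorted(top, reverse=True):
    print(f"Data Index {data_index}: Max Act {max_act:.4f} at token {token_index}")
```

A min-heap keeps memory constant while streaming: the smallest retained maximum sits at the root, so each new document either displaces it or is discarded in O(log k).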