Model: GPT-2 Small (12 layers, 3072 MLP neurons per layer)
Dataset: Open Web Text
Neuron 1515 in Layer 3
Hooked Transformer Loading: HookedTransformer.from_pretrained('gpt2-small')
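The statistics listed for each text below (Max Act, Min Act, Max Activating Token Index) refer to this neuron's post-nonlinearity MLP activation. A minimal sketch, assuming TransformerLens and a placeholder prompt rather than an actual Open Web Text document, of how such values could be read out:

```python
# Minimal sketch (not the original analysis pipeline): read out one MLP
# neuron's activations for a single prompt with TransformerLens.
from transformer_lens import HookedTransformer

LAYER, NEURON = 3, 1515

model = HookedTransformer.from_pretrained("gpt2-small")

# Placeholder prompt; the listing below uses Open Web Text documents instead.
text = "An example prompt; any Open Web Text document would be used here."
tokens = model.to_tokens(text)

# Run the model and cache intermediate activations.
_, cache = model.run_with_cache(tokens)

# Post-nonlinearity MLP activations for layer 3: shape [batch, seq, d_mlp=3072].
acts = cache["post", LAYER][0, :, NEURON]

max_act, max_token_index = acts.max(dim=0)
print(f"Max Act: {max_act.item():.4f} at token index {max_token_index.item()}")
```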
Text #0
Max Range: 2.2248. Min Range: -2.2248
Max Act: 2.2248. Min Act: -0.1700
Data Index: 6012271 (Open Web Text)
Max Activating Token Index: 642
Text #1
Max Range: 2.2248. Min Range: -2.2248
Max Act: 2.1632. Min Act: -0.1700
Data Index: 1635564 (Open Web Text)
Max Activating Token Index: 714
Text #2
Max Range: 2.2248. Min Range: -2.2248
Max Act: 2.1855. Min Act: -0.1700
Data Index: 4651262 (Open Web Text)
Max Activating Token Index: 536
Text #3
Max Range: 2.2248. Min Range: -2.2248
Max Act: 2.1774. Min Act: -0.1700
Data Index: 7093066 (Open Web Text)
Max Activating Token Index: 435
Text #4
Max Range: 2.2248. Min Range: -2.2248
Max Act: 2.1673. Min Act: -0.1700
Data Index: 7825796 (Open Web Text)
Max Activating Token Index: 260
Text #5
Max Range: 2.2248. Min Range: -2.2248
Max Act: 2.1520. Min Act: -0.1700
Data Index: 1428563 (Open Web Text)
Max Activating Token Index: 393
Text #6
Max Range: 2.2248. Min Range: -2.2248
Max Act: 2.1331. Min Act: -0.1700
Data Index: 6996846 (Open Web Text)
Max Activating Token Index: 458
Text #7
Max Range: 2.2248. Min Range: -2.2248
Max Act: 2.1502. Min Act: -0.1700
Data Index: 5609916 (Open Web Text)
Max Activating Token Index: 438
Text #8
Max Range: 2.2248. Min Range: -2.2248
Max Act: 2.1364. Min Act: -0.1700
Data Index: 2025070 (Open Web Text)
Max Activating Token Index: 690
Text #9
Max Range: 2.2248. Min Range: -2.2248
Max Act: 2.1215. Min Act: -0.1700
Data Index: 639498 (Open Web Text)
Max Activating Token Index: 966
Text #10
Max Range: 2.2248. Min Range: -2.2248
Max Act: 2.1225. Min Act: -0.1700
Data Index: 163720 (Open Web Text)
Max Activating Token Index: 331
Text #11
Max Range: 2.2248. Min Range: -2.2248
Max Act: 2.1177. Min Act: -0.1700
Data Index: 3390519 (Open Web Text)
Max Activating Token Index: 517
Text #12
Max Range: 2.2248. Min Range: -2.2248
Max Act: 2.1218. Min Act: -0.1700
Data Index: 2579480 (Open Web Text)
Max Activating Token Index: 76
Text #13
Max Range: 2.2248. Min Range: -2.2248
Max Act: 2.1308. Min Act: -0.1700
Data Index: 3291938 (Open Web Text)
Max Activating Token Index: 668
Text #14
Max Range: 2.2248. Min Range: -2.2248
Max Act: 2.1204. Min Act: -0.1700
Data Index: 6434055 (Open Web Text)
Max Activating Token Index: 112
Text #15
Max Range: 2.2248. Min Range: -2.2248
Max Act: 2.1251. Min Act: -0.1700
Data Index: 4892890 (Open Web Text)
Max Activating Token Index: 616
Text #16
Max Range: 2.2248. Min Range: -2.2248
Max Act: 2.0924. Min Act: -0.1700
Data Index: 5802083 (Open Web Text)
Max Activating Token Index: 904
Text #17
Max Range: 2.2248. Min Range: -2.2248
Max Act: 2.1020. Min Act: -0.1700
Data Index: 266938 (Open Web Text)
Max Activating Token Index: 399
Text #18
Max Range: 2.2248. Min Range: -2.2248
Max Act: 2.0858. Min Act: -0.1700
Data Index: 6552099 (Open Web Text)
Max Activating Token Index: 1022
Text #19
Max Range: 2.2248. Min Range: -2.2248
Max Act: 2.1106. Min Act: -0.1700
Data Index: 2390119 (Open Web Text)
Max Activating Token Index: 216
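Each record's Data Index is the position of the source document in the Open Web Text corpus, and the Max/Min Range line gives the activation range across the listed examples. A rough sketch, assuming a Hugging Face `openwebtext`-style dataset (the dataset id, field name, and truncation length are assumptions) and the same TransformerLens setup as above, of how such a top-k listing could be assembled:

```python
# Rough sketch (assumed dataset id, field names, and truncation length):
# collect the top-activating Open Web Text documents for one MLP neuron.
from datasets import load_dataset
from transformer_lens import HookedTransformer

LAYER, NEURON, TOP_K = 3, 1515, 20

model = HookedTransformer.from_pretrained("gpt2-small")
dataset = load_dataset("openwebtext", split="train", streaming=True)  # assumed dataset id

records = []
for data_index, example in enumerate(dataset):
    tokens = model.to_tokens(example["text"])[:, :1024]  # truncate to the context window
    _, cache = model.run_with_cache(tokens)
    acts = cache["post", LAYER][0, :, NEURON]
    max_act, max_token_index = acts.max(dim=0)
    records.append((max_act.item(), max_token_index.item(), data_index))
    if data_index >= 10_000:  # scan a bounded slice for illustration
        break

# Keep the TOP_K documents with the highest per-document max activation.
records.sort(reverse=True)
for max_act, token_index, data_index in records[:TOP_K]:
    print(f"Data Index: {data_index}  Max Act: {max_act:.4f}  "
          f"Max Activating Token Index: {token_index}")
```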