Model: GPT-2 XL: 48 Layers, 6400 Neurons per Layer
Dataset: Open Web Text
Neuron 518 in Layer 27
TransformerLens Loading: HookedTransformer.from_pretrained('gpt2-xl')
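A minimal sketch of how neuron 518's activations could be read out with TransformerLens, assuming the standard `blocks.27.mlp.hook_post` hook point for post-nonlinearity MLP neurons; the input text here is a placeholder, not one of the records below.

```python
import torch
from transformer_lens import HookedTransformer, utils

# Load GPT-2 XL (48 layers, 6400 MLP neurons per layer), as above.
model = HookedTransformer.from_pretrained('gpt2-xl')

LAYER, NEURON = 27, 518

# Placeholder input; the actual Open Web Text passages are not reproduced here.
tokens = model.to_tokens("An example passage to probe the neuron with.")

# Cache activations for one forward pass and slice out the post-GELU MLP
# activations of layer 27: shape [batch, seq_len, d_mlp] = [1, n_tokens, 6400].
with torch.no_grad():
    _, cache = model.run_with_cache(tokens)
acts = cache[utils.get_act_name("post", LAYER)][0, :, NEURON]

print(f"Max Act: {acts.max().item():.4f}")
print(f"Max Activating Token Index: {acts.argmax().item()}")
```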
Text #0
Max Range: 5.7256. Min Range: -5.7256
Max Act: 5.7256. Min Act: -0.1700
Data Index: 3329321 (Open Web Text)
Max Activating Token Index: 577
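Each record lists the quantities needed to re-derive it from the corpus. Below is a sketch of that per-document computation, assuming each record comes from tokenizing the document at the given Data Index and taking the maximum over token positions; `get_openwebtext_doc` is a hypothetical lookup helper, since the exact indexing scheme behind the Data Index values is not specified here.

```python
def max_activation_record(model, text, layer=27, neuron=518):
    """Return (Max Act, Max Activating Token Index) for one document."""
    tokens = model.to_tokens(text)
    _, cache = model.run_with_cache(tokens)
    acts = cache[f"blocks.{layer}.mlp.hook_post"][0, :, neuron]
    return acts.max().item(), acts.argmax().item()

# Hypothetical usage for the record above (Data Index 3329321):
# text = get_openwebtext_doc(3329321)                 # assumed lookup helper
# max_act, max_idx = max_activation_record(model, text)
# expected: max_act ~= 5.7256, max_idx == 577
```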
Text #1
Max Range: 5.7256. Min Range: -5.7256
Max Act: 5.5381. Min Act: -0.1700
Data Index: 3860028 (Open Web Text)
Max Activating Token Index: 681
Text #2
Max Range: 5.7256. Min Range: -5.7256
Max Act: 5.4756. Min Act: -0.1700
Data Index: 4579373 (Open Web Text)
Max Activating Token Index: 616
Text #3
Max Range: 5.7256. Min Range: -5.7256
Max Act: 5.4443. Min Act: -0.1700
Data Index: 408632 (Open Web Text)
Max Activating Token Index: 273
Text #4
Max Range: 5.7256. Min Range: -5.7256
Max Act: 5.4443. Min Act: -0.1700
Data Index: 5059691 (Open Web Text)
Max Activating Token Index: 651
Text #5
Max Range: 5.7256. Min Range: -5.7256
Max Act: 5.4131. Min Act: -0.1700
Data Index: 3026599 (Open Web Text)
Max Activating Token Index: 550
Text #6
Max Range: 5.7256. Min Range: -5.7256
Max Act: 5.4131. Min Act: -0.1700
Data Index: 8276794 (Open Web Text)
Max Activating Token Index: 670
Text #7
Max Range: 5.7256. Min Range: -5.7256
Max Act: 5.3818. Min Act: -0.1700
Data Index: 7681736 (Open Web Text)
Max Activating Token Index: 451
Text #8
Max Range: 5.7256. Min Range: -5.7256
Max Act: 5.4131. Min Act: -0.1700
Data Index: 8656554 (Open Web Text)
Max Activating Token Index: 509
Text #9
Max Range: 5.7256. Min Range: -5.7256
Max Act: 5.3818. Min Act: -0.1700
Data Index: 974812 (Open Web Text)
Max Activating Token Index: 545
Text #10
Max Range: 5.7256. Min Range: -5.7256
Max Act: 5.3506. Min Act: -0.1700
Data Index: 3871520 (Open Web Text)
Max Activating Token Index: 281
Text #11
Max Range: 5.7256. Min Range: -5.7256
Max Act: 5.3506. Min Act: -0.1700
Data Index: 3536459 (Open Web Text)
Max Activating Token Index: 294
Text #12
Max Range: 5.7256. Min Range: -5.7256
Max Act: 5.3193. Min Act: -0.1700
Data Index: 5815715 (Open Web Text)
Max Activating Token Index: 589
Text #13
Max Range: 5.7256. Min Range: -5.7256
Max Act: 5.3193. Min Act: -0.1700
Data Index: 2396542 (Open Web Text)
Max Activating Token Index: 444
Text #14
Max Range: 5.7256. Min Range: -5.7256
Max Act: 5.2881. Min Act: -0.1700
Data Index: 899153 (Open Web Text)
Max Activating Token Index: 281
Text #15
Max Range: 5.7256. Min Range: -5.7256
Max Act: 5.2881. Min Act: -0.1700
Data Index: 136993 (Open Web Text)
Max Activating Token Index: 640
Text #16
Max Range: 5.7256. Min Range: -5.7256
Max Act: 5.2881. Min Act: -0.1700
Data Index: 8475996 (Open Web Text)
Max Activating Token Index: 72
Text #17
Max Range: 5.7256. Min Range: -5.7256
Max Act: 5.2568. Min Act: -0.1700
Data Index: 458316 (Open Web Text)
Max Activating Token Index: 729
Text #18
Max Range: 5.7256. Min Range: -5.7256
Max Act: 5.2568. Min Act: -0.1700
Data Index: 4415854 (Open Web Text)
Max Activating Token Index: 533
Text #19
Max Range: 5.7256. Min Range: -5.7256
Max Act: 5.2256. Min Act: -0.1700
Data Index: 1294797 (Open Web Text)
Max Activating Token Index: 621
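The twenty records above form a ranking by Max Act over the corpus. Below is a rough sketch of how such a ranking could be rebuilt, assuming Open Web Text is available through the Hugging Face `datasets` library under the `openwebtext` id and that Data Index corresponds to the row number in that dump (both are assumptions).

```python
import heapq
import torch
from datasets import load_dataset
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained('gpt2-xl')
ds = load_dataset("openwebtext", split="train")  # assumed dataset id and split

LAYER, NEURON, TOP_K = 27, 518, 20
top = []  # min-heap of (max_act, data_index, max_token_index)

with torch.no_grad():
    for data_index in range(10_000):  # scan a slice; the full dump is far larger
        # Cap at GPT-2's 1024-token context window.
        tokens = model.to_tokens(ds[data_index]["text"])[:, :1024]
        _, cache = model.run_with_cache(tokens)
        acts = cache[f"blocks.{LAYER}.mlp.hook_post"][0, :, NEURON]
        entry = (acts.max().item(), data_index, acts.argmax().item())
        if len(top) < TOP_K:
            heapq.heappush(top, entry)
        else:
            heapq.heappushpop(top, entry)

# Print the records in descending order of Max Act, mirroring the list above.
for max_act, data_index, token_index in sorted(top, reverse=True):
    print(f"Max Act: {max_act:.4f}  Data Index: {data_index}  "
          f"Max Activating Token Index: {token_index}")
```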