Model: GPT-2 Large (36 layers, 5120 MLP neurons per layer)
Dataset: Open Web Text
Neuron: 13 in Layer 11
TransformerLens Loading: HookedTransformer.from_pretrained('gpt2-large')
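
For reference, below is a minimal sketch of how the loading line above yields this neuron's activations, assuming the standard TransformerLens cache API; the prompt string is a placeholder, not a text from this export. Note that Min Act is -0.1700 in every entry below, which is consistent with reading the post-GELU MLP hook: GELU has an approximate lower bound of -0.17.

import torch
from transformer_lens import HookedTransformer

# Load GPT-2 Large exactly as stated above.
model = HookedTransformer.from_pretrained('gpt2-large')

LAYER, NEURON = 11, 13  # Neuron 13 in Layer 11

text = "Example prompt text"  # placeholder; the export's texts are truncated
tokens = model.to_tokens(text)  # shape [1, seq_len]; BOS prepended by default

with torch.no_grad():
    _, cache = model.run_with_cache(tokens)

# Post-GELU MLP activations: shape [batch, seq_len, d_mlp] with d_mlp = 5120.
acts = cache["post", LAYER][0, :, NEURON]

max_act, max_idx = acts.max(dim=0)
print(f"Max Act: {max_act.item():.4f} at token index {max_idx.item()}")
print(f"Min Act: {acts.min().item():.4f}")  # bounded below by GELU's ~ -0.17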
All 20 entries share Max Range: 4.6130 and Min Range: -4.6130, and every Data Index refers to Open Web Text. The full texts were truncated in the original export and are not reproduced here.

Text #   Max Act   Min Act    Data Index   Max Activating Token Index
0        4.6130    -0.1700    3216068      284
1        4.6130    -0.1700    6818110      877
2        4.5193    -0.1700    5271912      1011
3        4.5193    -0.1700    7746802      831
4        4.4880    -0.1700    4119836      808
5        4.4880    -0.1700    8092755      214
6        4.4880    -0.1700    1804788      668
7        4.4880    -0.1700    1071792      907
8        4.4568    -0.1700    5524214      261
9        4.4568    -0.1700    3318216      938
10       4.4255    -0.1700    6780245      505
11       4.4880    -0.1700    6222170      368
12       4.4255    -0.1700    3867406      471
13       4.4255    -0.1700    1078848      436
14       4.4255    -0.1700    8752141      219
15       4.4255    -0.1700    5905800      820
16       4.4255    -0.1700    3601656      456
17       4.3943    -0.1700    5321159      776
18       4.3630    -0.1700    3505564      176
19       4.4255    -0.1700    3038655      362
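
A table like the one above can be regenerated by scanning dataset texts and recording each document's peak activation for this neuron. The sketch below makes several assumptions not taken from this file: the Hugging Face dataset id "openwebtext", the 100-document sample size, and the use of the scan position as the Data Index (the export's own indexing scheme is not documented here).

import torch
from datasets import load_dataset
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained('gpt2-large')
LAYER, NEURON = 11, 13

# Assumed dataset id; the export only says "Open Web Text".
# Depending on your `datasets` version this may need trust_remote_code=True.
dataset = load_dataset("openwebtext", split="train", streaming=True)

records = []
for data_index, example in zip(range(100), dataset):  # small sample for illustration
    tokens = model.to_tokens(example["text"])[:, :1024]  # stay inside GPT-2's context window
    with torch.no_grad():
        _, cache = model.run_with_cache(tokens)
    acts = cache["post", LAYER][0, :, NEURON]  # post-GELU MLP activations, [pos]
    max_act, max_idx = acts.max(dim=0)
    records.append((data_index, max_act.item(), max_idx.item()))

# Sort descending by peak activation, matching the ordering of the table above.
records.sort(key=lambda r: -r[1])
for data_index, max_act, token_index in records[:20]:
    print(f"Data Index: {data_index}  Max Act: {max_act:.4f}  "
          f"Max Activating Token Index: {token_index}")

Streaming mode avoids downloading the full corpus; for a faithful reproduction the scan would cover every document rather than a 100-document sample.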