Model: GPT-2 Large (36 layers, 5120 MLP neurons per layer)
Dataset: Open Web Text
Neuron: 5114 in Layer 24
TransformerLens loading: HookedTransformer.from_pretrained('gpt2-large')
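
For reference, the snippet below is a minimal sketch of how this neuron's activations can be read out with TransformerLens, assuming its standard hook-name conventions; the example text is a placeholder, not one of the Open Web Text documents listed below. The constant Min Act of -0.1700 in every example is consistent with GPT-2's GELU nonlinearity, whose minimum value is roughly -0.17.

    from transformer_lens import HookedTransformer

    # Load GPT-2 Large with TransformerLens, as in the header above.
    model = HookedTransformer.from_pretrained('gpt2-large')

    # Placeholder text; the actual Open Web Text documents are not reproduced here.
    text = "Example input to probe the neuron with."
    _, cache = model.run_with_cache(text)

    # 'blocks.24.mlp.hook_post' holds layer 24's post-GELU MLP activations,
    # shape [batch, position, d_mlp], with d_mlp = 5120 for GPT-2 Large.
    acts = cache['blocks.24.mlp.hook_post'][0, :, 5114]

    max_pos = int(acts.argmax())
    tokens = model.to_str_tokens(text)
    print(f"Max Act: {acts[max_pos].item():.4f} at token index {max_pos} ({tokens[max_pos]!r})")
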
All 20 examples below are drawn from Open Web Text; their full texts are truncated in the source and omitted here. Every example shares Max Range: 7.9390, Min Range: -7.9390, and Min Act: -0.1700.

Text   Max Act   Data Index   Max Activating Token Index
#0     7.9390    1100176      701
#1     7.6890    3703860      504
#2     7.6577    2387625      446
#3     7.5952    5745384      498
#4     7.5952    2346732      390
#5     7.5640    7965995      882
#6     7.5952    8583316      501
#7     7.5640    3108487      884
#8     7.5640    783832       486
#9     7.5327    8416844      104
#10    7.4702    1720345      509
#11    7.4702    7972495      1010
#12    7.4702    3477290      325
#13    7.4390    2716546      553
#14    7.4390    7153422      125
#15    7.4390    3584321      812
#16    7.4077    6702428      695
#17    7.4390    277084       445
#18    7.4077    2115900      432
#19    7.3765    3204811      357
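
The Data Index and Max Activating Token Index columns suggest a top-k scan over the corpus. The sketch below shows one way such a scan could be reproduced; the Hugging Face dataset id ('Skylion007/openwebtext') and the simple enumerate-order indexing are both assumptions, so the resulting indices need not match the table above.

    import heapq

    import torch
    from datasets import load_dataset
    from transformer_lens import HookedTransformer

    LAYER, NEURON, TOP_K, N_DOCS = 24, 5114, 20, 10_000
    HOOK = f"blocks.{LAYER}.mlp.hook_post"

    model = HookedTransformer.from_pretrained('gpt2-large')
    # Dataset id is an assumption; the report's Data Index values may be based
    # on a different snapshot or document ordering of Open Web Text.
    docs = load_dataset('Skylion007/openwebtext', split='train', streaming=True)

    top = []  # min-heap of (max_act, data_index, max_token_index)
    for data_index, doc in enumerate(docs):
        if data_index >= N_DOCS:
            break
        tokens = model.to_tokens(doc['text'])[:, :1024]  # clip to GPT-2's context window
        with torch.no_grad():
            # Cache only the one hook point we need, to keep memory bounded.
            _, cache = model.run_with_cache(tokens, names_filter=HOOK)
        acts = cache[HOOK][0, :, NEURON]
        heapq.heappush(top, (acts.max().item(), data_index, int(acts.argmax())))
        if len(top) > TOP_K:
            heapq.heappop(top)

    for max_act, data_index, token_index in sorted(top, reverse=True):
        print(f"Max Act: {max_act:.4f}  Data Index: {data_index}  "
              f"Max Activating Token Index: {token_index}")

Streaming the dataset and passing names_filter keeps memory bounded: caching every activation of GPT-2 Large across thousands of documents would otherwise be prohibitive.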