Model: GPT-2 Medium (24 layers, 4096 MLP neurons per layer)
Dataset: Open Web Text
Neuron: 6 in Layer 12
TransformerLens loading: HookedTransformer.from_pretrained('gpt2-medium')
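The loading line above uses TransformerLens. As a minimal sketch of how one record's numbers could be recomputed (the input text below is a placeholder, since the original Open Web Text documents are not reproduced here), run the model with a cache and read off neuron 6's post-nonlinearity MLP activations in layer 12: Max Act is the peak value over token positions and Max Activating Token Index is its position. The shared Max Range / Min Range lines appear to be a symmetric display range (± the largest Max Act in the list) rather than per-text statistics.

    import torch
    from transformer_lens import HookedTransformer

    model = HookedTransformer.from_pretrained("gpt2-medium")
    LAYER, NEURON = 12, 6

    # Placeholder input: the real records point at specific Open Web Text
    # documents via their Data Index, which are not reproduced here.
    text = "An example passage standing in for an Open Web Text document."

    tokens = model.to_tokens(text)  # shape [1, seq_len]
    with torch.no_grad():
        _, cache = model.run_with_cache(tokens)

    # Post-nonlinearity MLP activations: [batch, pos, d_mlp]; d_mlp = 4096
    # for GPT-2 Medium. Select this neuron's activation at every position.
    acts = cache["post", LAYER][0, :, NEURON]

    print(f"Max Act: {acts.max().item():.4f}")                    # peak activation
    print(f"Max Activating Token Index: {acts.argmax().item()}")  # its position
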
Text #0
Max Range: 3.1446. Min Range: -3.1446
Max Act: 3.1446. Min Act: -0.1700
Data Index: 3347572 (Open Web Text)
Max Activating Token Index: 117
    
Text #1
Max Range: 3.1446. Min Range: -3.1446
Max Act: 3.0816. Min Act: -0.1700
Data Index: 1378112 (Open Web Text)
Max Activating Token Index: 589
    
Text #2
Max Range: 3.1446. Min Range: -3.1446
Max Act: 3.0027. Min Act: -0.1700
Data Index: 6319222 (Open Web Text)
Max Activating Token Index: 93
    
Text #3
Max Range: 3.1446. Min Range: -3.1446
Max Act: 2.9711. Min Act: -0.1700
Data Index: 4794422 (Open Web Text)
Max Activating Token Index: 554
    
Text #4
Max Range: 3.1446. Min Range: -3.1446
Max Act: 2.9552. Min Act: -0.1700
Data Index: 4022584 (Open Web Text)
Max Activating Token Index: 83
    
Text #5
Max Range: 3.1446. Min Range: -3.1446
Max Act: 2.9236. Min Act: -0.1700
Data Index: 8569887 (Open Web Text)
Max Activating Token Index: 26
    
Text #6
Max Range: 3.1446. Min Range: -3.1446
Max Act: 2.9236. Min Act: -0.1700
Data Index: 1609529 (Open Web Text)
Max Activating Token Index: 438
    
Text #7
Max Range: 3.1446. Min Range: -3.1446
Max Act: 2.8760. Min Act: -0.1700
Data Index: 6848729 (Open Web Text)
Max Activating Token Index: 913
    
Text #8
Max Range: 3.1446. Min Range: -3.1446
Max Act: 2.9077. Min Act: -0.1700
Data Index: 8365143 (Open Web Text)
Max Activating Token Index: 930
    
Text #9
Max Range: 3.1446. Min Range: -3.1446
Max Act: 2.8442. Min Act: -0.1700
Data Index: 7903173 (Open Web Text)
Max Activating Token Index: 902
    
Text #10
Max Range: 3.1446. Min Range: -3.1446
Max Act: 2.8601. Min Act: -0.1700
Data Index: 5393293 (Open Web Text)
Max Activating Token Index: 104
    
Text #11
Max Range: 3.1446. Min Range: -3.1446
Max Act: 2.9077. Min Act: -0.1700
Data Index: 8226544 (Open Web Text)
Max Activating Token Index: 214
    
Text #12
Max Range: 3.1446. Min Range: -3.1446
Max Act: 2.8601. Min Act: -0.1700
Data Index: 6623934 (Open Web Text)
Max Activating Token Index: 366
    
Text #13
Max Range: 3.1446. Min Range: -3.1446
Max Act: 2.8601. Min Act: -0.1700
Data Index: 5491873 (Open Web Text)
Max Activating Token Index: 590
    
Text #14
Max Range: 3.1446. Min Range: -3.1446
Max Act: 2.8601. Min Act: -0.1700
Data Index: 6403413 (Open Web Text)
Max Activating Token Index: 190
    
Text #15
Max Range: 3.1446. Min Range: -3.1446
Max Act: 2.8601. Min Act: -0.1700
Data Index: 2868750 (Open Web Text)
Max Activating Token Index: 544
    
Text #16
Max Range: 3.1446. Min Range: -3.1446
Max Act: 2.8442. Min Act: -0.1700
Data Index: 6191467 (Open Web Text)
Max Activating Token Index: 1002
    
Text #17
Max Range: 3.1446. Min Range: -3.1446
Max Act: 2.8283. Min Act: -0.1700
Data Index: 3403305 (Open Web Text)
Max Activating Token Index: 836
    
Text #18
Max Range: 3.1446. Min Range: -3.1446
Max Act: 2.8442. Min Act: -0.1700
Data Index: 3038239 (Open Web Text)
Max Activating Token Index: 581
    
Text #19
Max Range: 3.1446. Min Range: -3.1446
Max Act: 2.8442. Min Act: -0.1700
Data Index: 7319801 (Open Web Text)
Max Activating Token Index: 656
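
A list like the one above is the result of approximately ranking corpus documents by this neuron's peak activation. The sketch below shows one way such a top-20 list could be built; the small openwebtext sample, the truncation to GPT-2's 1024-token context, and the one-document-at-a-time loop are illustrative assumptions, not the original pipeline.

    import heapq
    import torch
    from datasets import load_dataset
    from transformer_lens import HookedTransformer

    model = HookedTransformer.from_pretrained("gpt2-medium")
    LAYER, NEURON, TOP_K = 12, 6, 20

    # A small public Open Web Text sample, standing in for the full corpus.
    dataset = load_dataset("stas/openwebtext-10k", split="train")

    top = []  # min-heap of (max_act, data_index, max_token_index)
    for data_index, row in enumerate(dataset):
        tokens = model.to_tokens(row["text"])[:, :1024]  # respect n_ctx = 1024
        with torch.no_grad():
            _, cache = model.run_with_cache(tokens)
        acts = cache["post", LAYER][0, :, NEURON]
        entry = (acts.max().item(), data_index, acts.argmax().item())
        if len(top) < TOP_K:
            heapq.heappush(top, entry)
        else:
            heapq.heappushpop(top, entry)  # keep only the TOP_K largest

    for max_act, data_index, token_index in sorted(top, reverse=True):
        print(f"Data Index: {data_index}  Max Act: {max_act:.4f}  "
              f"Max Activating Token Index: {token_index}")

Keeping a fixed-size min-heap avoids storing a score for every document; each new document only displaces the current weakest of the retained top 20.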