Model: SoLU, 10 layers, 5120 neurons per layer
Dataset: The Pile
Neuron: 5104 in Layer 5
TransformerLens loading: HookedTransformer.from_pretrained('solu-10l-pile')
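The per-text statistics below ("Max Act" and "Max Activating Token Index") can be computed from the layer-5 MLP post-activation tensor for a given text. The following is a minimal sketch using random stand-in data in place of real model output; in TransformerLens the real tensor would typically come from `model.run_with_cache(text)` at the hook point `blocks.5.mlp.hook_post` (hook name assumed from TransformerLens's standard naming, and the random values do not reflect real SoLU activation statistics).

```python
import numpy as np

LAYER, NEURON, D_MLP = 5, 5104, 5120
SEQ_LEN = 1024

# Stand-in for cache["blocks.5.mlp.hook_post"][0] after
# HookedTransformer.from_pretrained('solu-10l-pile').run_with_cache(text).
# Shape [seq_len, d_mlp]; filled with random values purely for illustration.
rng = np.random.default_rng(0)
mlp_post = rng.standard_normal((SEQ_LEN, D_MLP))

# Per-text statistics for one neuron, as reported in the records below.
neuron_acts = mlp_post[:, NEURON]            # activation at every token position
max_act = float(neuron_acts.max())           # "Max Act"
max_token_index = int(neuron_acts.argmax())  # "Max Activating Token Index"

print(f"Neuron {NEURON} in Layer {LAYER}: "
      f"Max Act {max_act:.4f} at token index {max_token_index}")
```

The same per-neuron slice-and-argmax over token positions yields the figures reported for each text in this dump.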
All records below share the same range (Max Range 0.6540, Min Range -0.6540), and Min Act is -0.0000 for every text. Full texts are truncated and not shown.

Text   Max Act   Data Index (The Pile)   Max Activating Token Index
#0     0.6540    252918                  1011
#1     0.6062    1995430                 617
#2     0.5673    138756                  835
#3     0.5150    1905002                 382
#4     0.5214    1389999                 888
#5     0.4861    636571                  683
#6     0.5296    1021448                 805
#7     0.4620    1933352                 419
#8     0.4708    861159                  311
#9     0.4516    232209                  997
#10    0.4603    453511                  552
#11    0.4165    1999566                 735
#12    0.4528    541342                  338
#13    0.4506    1478058                 182
#14    0.4364    1961995                 44
#15    0.4719    369323                  617
#16    0.4634    1698728                 626
#17    0.4445    1659358                 924
#18    0.4452    1903813                 465
#19    0.4398    1354467                 759