Model: SoLU, 10 layers, 5120 neurons per layer
Dataset: The Pile
Neuron 0 in Layer 7
TransformerLens Loading: HookedTransformer.from_pretrained('solu-10l-pile')
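
The snippet below is a minimal sketch of how to inspect this neuron yourself, assuming the standard TransformerLens API and the hook name blocks.{layer}.mlp.hook_post for MLP neuron activations; the prompt is an arbitrary placeholder, not one of the Pile examples recorded below.

```python
from transformer_lens import HookedTransformer

# Model name taken from the header above.
model = HookedTransformer.from_pretrained('solu-10l-pile')

LAYER, NEURON = 7, 0  # the neuron documented on this page
prompt = "The quick brown fox jumps over the lazy dog."  # arbitrary placeholder

# Run the model and cache all intermediate activations.
logits, cache = model.run_with_cache(prompt)

# MLP neuron activations live at 'blocks.{layer}.mlp.hook_post',
# with shape [batch, position, d_mlp].
acts = cache[f'blocks.{LAYER}.mlp.hook_post'][0, :, NEURON]

# Report where this neuron fires most strongly on the prompt.
pos = acts.argmax().item()
str_tokens = model.to_str_tokens(prompt)
print(f'Max Act: {acts.max().item():.4f} at token {pos} ({str_tokens[pos]!r})')
```
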
Text #0
Max Range: 3.1609. Min Range: -3.1609
Max Act: 3.1609. Min Act: -0.0000
Data Index: 766970 (The Pile)
Max Activating Token Index: 953
 
Text #1
Max Range: 3.1609. Min Range: -3.1609
Max Act: 3.0041. Min Act: -0.0000
Data Index: 684532 (The Pile)
Max Activating Token Index: 989
 
Text #2
Max Range: 3.1609. Min Range: -3.1609
Max Act: 3.0719. Min Act: -0.0000
Data Index: 739799 (The Pile)
Max Activating Token Index: 477
 
Text #3
Max Range: 3.1609. Min Range: -3.1609
Max Act: 2.6758. Min Act: -0.0000
Data Index: 248275 (The Pile)
Max Activating Token Index: 557
 
Text #4
Max Range: 3.1609. Min Range: -3.1609
Max Act: 2.7998. Min Act: -0.0000
Data Index: 1448045 (The Pile)
Max Activating Token Index: 441
 
Text #5
Max Range: 3.1609. Min Range: -3.1609
Max Act: 2.7678. Min Act: -0.0000
Data Index: 192749 (The Pile)
Max Activating Token Index: 356
 
Text #6
Max Range: 3.1609. Min Range: -3.1609
Max Act: 2.7560. Min Act: -0.0000
Data Index: 1891332 (The Pile)
Max Activating Token Index: 866
 
Text #7
Max Range: 3.1609. Min Range: -3.1609
Max Act: 3.0071. Min Act: -0.0000
Data Index: 684065 (The Pile)
Max Activating Token Index: 920
 
Text #8
Max Range: 3.1609. Min Range: -3.1609
Max Act: 2.7600. Min Act: -0.0000
Data Index: 1895111 (The Pile)
Max Activating Token Index: 422
 
Text #9
Max Range: 3.1609. Min Range: -3.1609
Max Act: 2.4385. Min Act: -0.0000
Data Index: 931660 (The Pile)
Max Activating Token Index: 1010
 
Text #10
Max Range: 3.1609. Min Range: -3.1609
Max Act: 2.4006. Min Act: -0.0000
Data Index: 1618754 (The Pile)
Max Activating Token Index: 190
 
Text #11
Max Range: 3.1609. Min Range: -3.1609
Max Act: 2.6491. Min Act: -0.0000
Data Index: 1619090 (The Pile)
Max Activating Token Index: 318
 
Text #12
Max Range: 3.1609. Min Range: -3.1609
Max Act: 2.8994. Min Act: -0.0000
Data Index: 490938 (The Pile)
Max Activating Token Index: 1020
 
Text #13
Max Range: 3.1609. Min Range: -3.1609
Max Act: 2.6131. Min Act: -0.0000
Data Index: 445268 (The Pile)
Max Activating Token Index: 295
 
Text #14
Max Range: 3.1609. Min Range: -3.1609
Max Act: 2.6094. Min Act: -0.0000
Data Index: 1569194 (The Pile)
Max Activating Token Index: 839
 
Text #15
Max Range: 3.1609. Min Range: -3.1609
Max Act: 2.7489. Min Act: -0.0000
Data Index: 1919277 (The Pile)
Max Activating Token Index: 779
 
Text #16
Max Range: 3.1609. Min Range: -3.1609
Max Act: 2.5966. Min Act: -0.0000
Data Index: 468524 (The Pile)
Max Activating Token Index: 1018
 
Text #17
Max Range: 3.1609. Min Range: -3.1609
Max Act: 2.6083. Min Act: -0.0000
Data Index: 1871499 (The Pile)
Max Activating Token Index: 89
 
Text #18
Max Range: 3.1609. Min Range: -3.1609
Max Act: 2.7110. Min Act: -0.0000
Data Index: 819049 (The Pile)
Max Activating Token Index: 469
 
Text #19
Max Range: 3.1609. Min Range: -3.1609
Max Act: 2.7095. Min Act: -0.0000
Data Index: 716571 (The Pile)
Max Activating Token Index: 466
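
The records above were produced by scanning the dataset for the examples on which this neuron fires most strongly. Below is a minimal sketch of that scan, using NeelNanda/pile-10k (a small public Pile sample) as an assumed stand-in corpus; the Data Index values above refer to the full tokenized Pile used to build this page, so indices found with this subset will not match them.

```python
import torch
from datasets import load_dataset
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained('solu-10l-pile')
LAYER, NEURON = 7, 0
HOOK = f'blocks.{LAYER}.mlp.hook_post'

# Stand-in corpus: a small public Pile sample (assumption; not the exact
# dataset copy whose indices are reported above).
dataset = load_dataset('NeelNanda/pile-10k', split='train')

records = []
with torch.no_grad():
    for i in range(200):  # scan a small slice for illustration
        tokens = model.to_tokens(dataset[i]['text'])[:, :1024]  # truncate long texts
        _, cache = model.run_with_cache(tokens, names_filter=HOOK)
        acts = cache[HOOK][0, :, NEURON]
        records.append((acts.max().item(), i, acts.argmax().item()))

# Strongest examples first, mirroring the record format above.
for max_act, data_index, token_index in sorted(records, reverse=True)[:5]:
    print(f'Max Act: {max_act:.4f}  Data Index: {data_index}  '
          f'Max Activating Token Index: {token_index}')
```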