Model: SoLU, 10 layers, 5120 neurons per layer
Dataset: The Pile
Neuron 698 in Layer 6
TransformerLens loading: HookedTransformer.from_pretrained('solu-10l-pile')
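For readers who want to inspect this neuron directly, the sketch below shows one way to do it with TransformerLens. It is a minimal sketch, not this viewer's own pipeline: the checkpoint name comes from the loading line above, the prompt is a placeholder, and the choice of the 'post' MLP hook point (activations after the SoLU nonlinearity and its LayerNorm) is an assumption about what this page records.

```python
import torch
from transformer_lens import HookedTransformer, utils

# Checkpoint name taken from the header above.
model = HookedTransformer.from_pretrained('solu-10l-pile')

LAYER, NEURON = 6, 698  # Neuron 698 in Layer 6
text = "Example placeholder prompt."  # substitute any text of interest

tokens = model.to_tokens(text)
with torch.no_grad():
    _, cache = model.run_with_cache(tokens)

# 'post' resolves to blocks.6.mlp.hook_post; shape [batch, position, d_mlp],
# with d_mlp = 5120 for this model (assumed hook point, see lead-in above).
acts = cache[utils.get_act_name('post', LAYER)][0, :, NEURON]
print(f"max act {acts.max().item():.4f} at token index {acts.argmax().item()}")
```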
Max Range: 1.3153. Min Range: -1.3153 (shared across all 20 examples). Min Act: -0.0000 for every example. Full activating texts are truncated in this export.

Text #   Max Act   Data Index (The Pile)   Max Activating Token Index
0        1.3078    150159                  616
1        1.2920    1796902                 150
2        1.3153    522512                  792
3        1.2140    648163                  802
4        1.3053    799736                  193
5        1.2542    20509                   431
6        1.2965    104042                  602
7        1.2965    320170                  602
8        1.2499    186068                  714
9        1.1722    275138                  864
10       1.2691    955204                  230
11       1.3028    1105042                 1019
12       1.1972    1250928                 189
13       1.1936    1238449                 643
14       1.2287    857907                  357
15       1.1695    182915                  674
16       1.1720    1223638                 795
17       1.2343    1681887                 212
18       1.2026    1793141                 73
19       1.1617    1131262                 411
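Below is a heavily hedged sketch for reproducing one row of the table. Everything on the dataset side is an assumption: PILE_PATH is a hypothetical placeholder for however your copy of The Pile is stored, 'Data Index' is treated as a plain row index into the split, and the text is truncated to the model's context window; none of this is confirmed by the page itself.

```python
import torch
from datasets import load_dataset
from transformer_lens import HookedTransformer, utils

PILE_PATH = 'path/to/the_pile'  # hypothetical placeholder, not a real path

model = HookedTransformer.from_pretrained('solu-10l-pile')
pile = load_dataset(PILE_PATH, split='train')

LAYER, NEURON = 6, 698
record = pile[150159]  # Data Index for Text #0, assuming it is a row index

# Truncate to the model's context window (n_ctx = 1024 for this model).
tokens = model.to_tokens(record['text'])[:, :model.cfg.n_ctx]
with torch.no_grad():
    _, cache = model.run_with_cache(tokens)

acts = cache[utils.get_act_name('post', LAYER)][0, :, NEURON]
# If the assumptions hold, this should roughly match the Text #0 row:
# max act ~1.3078 at token index 616.
print(acts.max().item(), acts.argmax().item())
```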