Model: SoLU, 10 Layers, 5120 Neurons per Layer
Dataset: 80% C4 (Web Text) and 20% Python Code
Neuron 5006 in Layer 4
TransformerLens Loading: HookedTransformer.from_pretrained('solu-10l')
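The model can be loaded and this neuron inspected directly. A minimal sketch in Python, assuming the per-neuron values on this page correspond to the post-activation MLP hook point (blocks.4.mlp.hook_post in TransformerLens; whether Neuroscope reads hook_post or hook_mid for SoLU models is an assumption here):

```python
# Minimal sketch: load solu-10l and read neuron 5006 in layer 4 on a sample text.
# Assumption: the activation shown on this page is the post-activation MLP value
# at 'blocks.4.mlp.hook_post'; the exact hook Neuroscope records is not stated here.
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained('solu-10l')

LAYER, NEURON = 4, 5006
text = "Example text to probe the neuron with."
_, cache = model.run_with_cache(text)

acts = cache[f"blocks.{LAYER}.mlp.hook_post"][0, :, NEURON]  # shape: (seq_len,)
print("max act:", acts.max().item(), "at token index:", acts.argmax().item())
```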
Text #0
Max Range: 0.9606. Min Range: -0.9606
Max Act: 0.9606. Min Act: -0.0001
Data Index: 846695 (C4 (Web Text))
Max Activating Token Index: 387
Text #1
Max Range: 0.9606. Min Range: -0.9606
Max Act: 0.8068. Min Act: -0.0001
Data Index: 251097 (C4 (Web Text))
Max Activating Token Index: 429
Text #2
Max Range: 0.9606. Min Range: -0.9606
Max Act: 0.7628. Min Act: -0.0001
Data Index: 1219341 (C4 (Web Text))
Max Activating Token Index: 461
Text #3
Max Range: 0.9606. Min Range: -0.9606
Max Act: 0.6803. Min Act: -0.0001
Data Index: 1074776 (C4 (Web Text))
Max Activating Token Index: 976
Text #4
Max Range: 0.9606. Min Range: -0.9606
Max Act: 0.7176. Min Act: -0.0001
Data Index: 647158 (C4 (Web Text))
Max Activating Token Index: 898
Text #5
Max Range: 0.9606. Min Range: -0.9606
Max Act: 0.6480. Min Act: -0.0001
Data Index: 1147399 (C4 (Web Text))
Max Activating Token Index: 990
Text #6
Max Range: 0.9606. Min Range: -0.9606
Max Act: 0.7602. Min Act: -0.0001
Data Index: 690103 (C4 (Web Text))
Max Activating Token Index: 388
Text #7
Max Range: 0.9606. Min Range: -0.9606
Max Act: 0.7052. Min Act: -0.0001
Data Index: 343919 (C4 (Web Text))
Max Activating Token Index: 518
Text #8
Max Range: 0.9606. Min Range: -0.9606
Max Act: 0.7068. Min Act: -0.0001
Data Index: 1196458 (C4 (Web Text))
Max Activating Token Index: 846
Text #9
Max Range: 0.9606. Min Range: -0.9606
Max Act: 0.7761. Min Act: -0.0001
Data Index: 1028888 (C4 (Web Text))
Max Activating Token Index: 869
Text #10
Max Range: 0.9606. Min Range: -0.9606
Max Act: 0.6821. Min Act: -0.0001
Data Index: 932901 (C4 (Web Text))
Max Activating Token Index: 897
Text #11
Max Range: 0.9606. Min Range: -0.9606
Max Act: 0.7296. Min Act: -0.0001
Data Index: 696153 (C4 (Web Text))
Max Activating Token Index: 882
Text #12
Max Range: 0.9606. Min Range: -0.9606
Max Act: 0.7523. Min Act: -0.0001
Data Index: 8486 (C4 (Web Text))
Max Activating Token Index: 198
Text #13
Max Range: 0.9606. Min Range: -0.9606
Max Act: 0.7051. Min Act: -0.0001
Data Index: 45616 (C4 (Web Text))
Max Activating Token Index: 724
Text #14
Max Range: 0.9606. Min Range: -0.9606
Max Act: 0.6811. Min Act: -0.0001
Data Index: 784397 (C4 (Web Text))
Max Activating Token Index: 448
Text #15
Max Range: 0.9606. Min Range: -0.9606
Max Act: 0.7019. Min Act: -0.0001
Data Index: 696591 (C4 (Web Text))
Max Activating Token Index: 767
Text #16
Max Range: 0.9606. Min Range: -0.9606
Max Act: 0.6962. Min Act: -0.0001
Data Index: 352474 (C4 (Web Text))
Max Activating Token Index: 358
Text #17
Max Range: 0.9606. Min Range: -0.9606
Max Act: 0.6303. Min Act: -0.0001
Data Index: 512567 (C4 (Web Text))
Max Activating Token Index: 290
Text #18
Max Range: 0.9606. Min Range: -0.9606
Max Act: 0.6681. Min Act: -0.0001
Data Index: 1318568 (C4 (Web Text))
Max Activating Token Index: 752
Text #19
Max Range: 0.9606. Min Range: -0.9606
Max Act: 0.6607. Min Act: -0.0001
Data Index: 481673 (C4 (Web Text))
Max Activating Token Index: 97
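Each record above can, in principle, be reproduced by scanning the dataset and logging where this neuron peaks. An illustrative sketch with stand-in texts; the exact C4/Python snapshot and the Data Index ordering Neuroscope used are not specified on this page:

```python
# Illustrative sketch: for each text, record the max activation of neuron 5006
# (layer 4) and the token index where it peaks, then sort by max activation.
# 'texts' is a stand-in; the real pipeline iterates over the indexed dataset.
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained('solu-10l')
LAYER, NEURON = 4, 5006

texts = ["first sample text", "second sample text"]  # stand-in corpus slice

records = []
for data_index, text in enumerate(texts):
    _, cache = model.run_with_cache(text)
    acts = cache[f"blocks.{LAYER}.mlp.hook_post"][0, :, NEURON]
    records.append((data_index, acts.max().item(), acts.argmax().item()))

records.sort(key=lambda r: -r[1])  # highest Max Act first
for data_index, max_act, tok_idx in records:
    print(f"Data Index: {data_index}  Max Act: {max_act:.4f}  Max Activating Token Index: {tok_idx}")
```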