Model: SoLU, 6 Layers, 3072 Neurons per Layer
Dataset: 80% C4 (Web Text) and 20% Python Code
Neuron 618 in Layer 3
TransformerLens Loading: HookedTransformer.from_pretrained('solu-6l')
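
For reference, a minimal sketch of loading this model and reading this neuron's activations with TransformerLens (this assumes the transformer_lens package is installed; the prompt string is a placeholder, not taken from this page):

from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained('solu-6l')
LAYER, NEURON = 3, 618  # Neuron 618 in Layer 3, as above

# run_with_cache returns the logits plus a cache of every intermediate activation
logits, cache = model.run_with_cache("placeholder text")

# Post-SoLU MLP activations live at the hook 'blocks.{layer}.mlp.hook_post';
# the cache supports ('post', layer) tuple indexing as shorthand for that name.
acts = cache['post', LAYER][0, :, NEURON]  # shape: [n_tokens]
print(acts.max().item(), acts.argmax().item())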
Max Range: 1.0602. Min Range: -1.0602 (identical across all 20 examples).
Min Act: -0.0001 for every example. All 20 examples are drawn from C4 (Web Text).
(Full example texts are truncated on the source page and not reproduced here.)

Text #   Max Act   Data Index   Max Activating Token Index
0        1.0102    965322       434
1        1.0602    1345518      732
2        0.8941    597846       1006
3        0.8636    1128130      460
4        0.8656    165875       720
5        0.8262    117118       802
6        0.8725    1313343      1008
7        0.8444    912314       154
8        0.8399    1002346      774
9        0.8190    1121979      958
10       0.8039    1132351      82
11       0.6972    369001       566
12       0.8049    430714       411
13       0.7846    527535       964
14       0.7553    526088       266
15       0.7579    592118       537
16       0.7904    1312311      945
17       0.7567    1015066      528
18       0.7477    965453       48
19       0.7033    668720       422
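
A row of this table can in principle be checked by re-running the model over the corresponding dataset example. Below is a hypothetical sketch: dataset_tokens is an assumed stand-in for the tokenized 80% C4 / 20% Python Code dataset, indexed by Data Index (this page does not specify how that dataset is loaded); the model and hook calls are standard TransformerLens API.

import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained('solu-6l')
LAYER, NEURON = 3, 618

def neuron_max(tokens: torch.Tensor):
    """Return (max activation, argmax token index) for one tokenized example."""
    _, cache = model.run_with_cache(tokens.unsqueeze(0))  # add batch dim
    acts = cache['post', LAYER][0, :, NEURON]  # post-SoLU MLP activations
    return acts.max().item(), acts.argmax().item()

# Hypothetical usage, assuming dataset_tokens[i] gives token ids for Data Index i.
# For Text #1 (Data Index 1345518) this should give roughly 1.0602 at index 732:
# act, idx = neuron_max(dataset_tokens[1345518])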