Model: SoLU, 4 Layers, 2048 Neurons per Layer
Dataset: The Pile
Neuron 373 in Layer 1
TransformerLens Loading: HookedTransformer.from_pretrained('solu-4l-pile')
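As a reference, here is a minimal sketch of how the per-text statistics below (Max Act, Min Act, Max Activating Token Index) could be recomputed with TransformerLens. The input string is a placeholder; the actual inputs are the Pile excerpts referenced by each Data Index.

```python
from transformer_lens import HookedTransformer

# Load the 4-layer SoLU model trained on The Pile.
model = HookedTransformer.from_pretrained("solu-4l-pile")

LAYER, NEURON = 1, 373

# Placeholder text; the real inputs are Pile excerpts (see the Data Index fields below).
tokens = model.to_tokens("Example input text")
_, cache = model.run_with_cache(tokens)

# Post-activation MLP values for the layer: shape [batch, seq_pos, d_mlp].
acts = cache["post", LAYER]
neuron_acts = acts[0, :, NEURON]

max_act = neuron_acts.max().item()             # corresponds to "Max Act"
max_token_index = neuron_acts.argmax().item()  # corresponds to "Max Activating Token Index"
min_act = neuron_acts.min().item()             # corresponds to "Min Act"
print(f"Max Act: {max_act:.4f} at token index {max_token_index}; Min Act: {min_act:.4f}")
```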
Text #0
Max Range: 0.2038. Min Range: -0.2038
Max Act: 0.2038. Min Act: -0.0004
Data Index: 637901 (The Pile)
Max Activating Token Index: 787
Text #1
Max Range: 0.2038. Min Range: -0.2038
Max Act: 0.1961. Min Act: -0.0004
Data Index: 1299521 (The Pile)
Max Activating Token Index: 293
Text #2
Max Range: 0.2038. Min Range: -0.2038
Max Act: 0.2031. Min Act: -0.0004
Data Index: 240330 (The Pile)
Max Activating Token Index: 269
Text #3
Max Range: 0.2038. Min Range: -0.2038
Max Act: 0.1876. Min Act: -0.0004
Data Index: 862852 (The Pile)
Max Activating Token Index: 396
Text #4
Max Range: 0.2038. Min Range: -0.2038
Max Act: 0.1829. Min Act: -0.0004
Data Index: 1087078 (The Pile)
Max Activating Token Index: 423
Text #5
Max Range: 0.2038. Min Range: -0.2038
Max Act: 0.1798. Min Act: -0.0004
Data Index: 861290 (The Pile)
Max Activating Token Index: 469
Text #6
Max Range: 0.2038. Min Range: -0.2038
Max Act: 0.1866. Min Act: -0.0004
Data Index: 1324123 (The Pile)
Max Activating Token Index: 71
Text #7
Max Range: 0.2038. Min Range: -0.2038
Max Act: 0.1851. Min Act: -0.0004
Data Index: 935232 (The Pile)
Max Activating Token Index: 373
Text #8
Max Range: 0.2038. Min Range: -0.2038
Max Act: 0.1853. Min Act: -0.0004
Data Index: 1275018 (The Pile)
Max Activating Token Index: 731
Text #9
Max Range: 0.2038. Min Range: -0.2038
Max Act: 0.1902. Min Act: -0.0003
Data Index: 112556 (The Pile)
Max Activating Token Index: 542
Text #10
Max Range: 0.2038. Min Range: -0.2038
Max Act: 0.1900. Min Act: -0.0004
Data Index: 417480 (The Pile)
Max Activating Token Index: 524
Text #11
Max Range: 0.2038. Min Range: -0.2038
Max Act: 0.1826. Min Act: -0.0004
Data Index: 746156 (The Pile)
Max Activating Token Index: 694
Text #12
Max Range: 0.2038. Min Range: -0.2038
Max Act: 0.1819. Min Act: -0.0004
Data Index: 58494 (The Pile)
Max Activating Token Index: 996
Text #13
Max Range: 0.2038. Min Range: -0.2038
Max Act: 0.1784. Min Act: -0.0004
Data Index: 1554669 (The Pile)
Max Activating Token Index: 396
Text #14
Max Range: 0.2038. Min Range: -0.2038
Max Act: 0.1787. Min Act: -0.0004
Data Index: 1041797 (The Pile)
Max Activating Token Index: 879
Text #15
Max Range: 0.2038. Min Range: -0.2038
Max Act: 0.1818. Min Act: -0.0004
Data Index: 999033 (The Pile)
Max Activating Token Index: 437
Text #16
Max Range: 0.2038. Min Range: -0.2038
Max Act: 0.1838. Min Act: -0.0004
Data Index: 152093 (The Pile)
Max Activating Token Index: 731
Text #17
Max Range: 0.2038. Min Range: -0.2038
Max Act: 0.1832. Min Act: -0.0004
Data Index: 1747165 (The Pile)
Max Activating Token Index: 158
Text #18
Max Range: 0.2038. Min Range: -0.2038
Max Act: 0.1798. Min Act: -0.0004
Data Index: 1427049 (The Pile)
Max Activating Token Index: 340
Text #19
Max Range: 0.2038. Min Range: -0.2038
Max Act: 0.1795. Min Act: -0.0003
Data Index: 1883137 (The Pile)
Max Activating Token Index: 315