Model: SoLU, 6 layers, 3072 neurons per layer
Dataset: The Pile
Neuron 2545 in Layer 4
TransformerLens Loading: HookedTransformer.from_pretrained('solu-6l-pile')
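
The listing below can be reproduced in TransformerLens. A minimal sketch, assuming the recorded values correspond to the layer-4 MLP "post" hook activations (an assumption; the example text is also a hypothetical stand-in, since the max-activating Pile texts are truncated here):

from transformer_lens import HookedTransformer

LAYER, NEURON = 4, 2545  # Neuron 2545 in Layer 4

model = HookedTransformer.from_pretrained("solu-6l-pile")

# Hypothetical stand-in text; the real inputs are the Pile documents
# addressed by the Data Index values listed below.
text = "The quick brown fox jumps over the lazy dog."

_, cache = model.run_with_cache(text)

# Post-activation MLP neurons for the layer: shape [batch, seq_pos, d_mlp=3072].
# Which MLP hook matches the recorded numbers is an assumption here.
acts = cache["post", LAYER][0, :, NEURON]

max_act, max_idx = acts.max(dim=0)
print(f"Max Act: {max_act.item():.4f}  Max Activating Token Index: {max_idx.item()}")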
Max Range: 1.6334. Min Range: -1.6334 (shared across all texts below)

Text #   Max Act   Min Act   Data Index (The Pile)   Max Activating Token Index
0        1.6334    -0.0000   1500560                 10
1        1.4101    -0.0000   1431520                 794
2        1.4083    -0.0000   48099                   8
3        1.3942    -0.0000   1533151                 7
4        1.3465    -0.0001   441065                  935
5        1.3033    -0.0000   227355                  1018
6        1.2559    -0.0000   208385                  733
7        1.3327    -0.0000   1739013                 229
8        1.2922    -0.0000   1494706                 247
9        1.2505    -0.0000   108245                  672
10       1.2664    -0.0000   622949                  4
11       1.2488    -0.0000   85794                   924
12       1.2567    -0.0000   1853102                 10
13       1.2392    -0.0000   295437                  238
14       1.2305    -0.0000   430798                  14
15       1.2610    -0.0001   1501015                 775
16       1.2243    -0.0000   1087058                 219
17       1.2245    -0.0000   21911                   206
18       1.2220    -0.0000   713823                  883
19       1.2177    -0.0000   78899                   391

(Full texts are truncated in the source and omitted here.)
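
The per-text fields above (Max Act, Max Activating Token Index) can be assembled by scanning a corpus and keeping the strongest examples. A minimal sketch, with a hypothetical two-document corpus standing in for the Pile documents addressed by the Data Index column:

import heapq
from transformer_lens import HookedTransformer

LAYER, NEURON, TOP_K = 4, 2545, 20

model = HookedTransformer.from_pretrained("solu-6l-pile")

# Hypothetical stand-in corpus; in the source, Data Index addresses the Pile.
corpus = ["First example document.", "Second example document."]

records = []
for data_index, text in enumerate(corpus):
    _, cache = model.run_with_cache(text)
    # Layer-4 MLP post-activations for this text, restricted to one neuron.
    acts = cache["post", LAYER][0, :, NEURON]
    max_act, max_idx = acts.max(dim=0)
    records.append((max_act.item(), data_index, max_idx.item()))

# Rank texts by their per-text maximum activation and keep the top TOP_K.
for max_act, data_index, token_index in heapq.nlargest(TOP_K, records):
    print(f"Data Index: {data_index}  Max Act: {max_act:.4f}  "
          f"Max Activating Token Index: {token_index}")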