Model: SoLU, 6 layers, 3072 neurons per layer
Dataset: The Pile
Neuron 205 in Layer 4
TransformerLens loading: HookedTransformer.from_pretrained('solu-6l-pile')
Text #0
Max Range: 0.4417. Min Range: -0.4417
Max Act: 0.4417. Min Act: 0.0001
Data Index: 1088790 (The Pile)
Max Activating Token Index: 374
Text #1
Max Range: 0.4417. Min Range: -0.4417
Max Act: 0.3977. Min Act: 0.0001
Data Index: 59390 (The Pile)
Max Activating Token Index: 534
Text #2
Max Range: 0.4417. Min Range: -0.4417
Max Act: 0.3814. Min Act: 0.0002
Data Index: 1657132 (The Pile)
Max Activating Token Index: 685
Text #3
Max Range: 0.4417. Min Range: -0.4417
Max Act: 0.3768. Min Act: 0.0001
Data Index: 1960726 (The Pile)
Max Activating Token Index: 73
Text #4
Max Range: 0.4417. Min Range: -0.4417
Max Act: 0.3466. Min Act: 0.0001
Data Index: 174281 (The Pile)
Max Activating Token Index: 493
Text #5
Max Range: 0.4417. Min Range: -0.4417
Max Act: 0.3428. Min Act: -0.0000
Data Index: 132785 (The Pile)
Max Activating Token Index: 775
Text #6
Max Range: 0.4417. Min Range: -0.4417
Max Act: 0.3226. Min Act: 0.0002
Data Index: 405016 (The Pile)
Max Activating Token Index: 546
Text #7
Max Range: 0.4417. Min Range: -0.4417
Max Act: 0.3145. Min Act: 0.0001
Data Index: 341941 (The Pile)
Max Activating Token Index: 494
Text #8
Max Range: 0.4417. Min Range: -0.4417
Max Act: 0.3263. Min Act: 0.0002
Data Index: 633202 (The Pile)
Max Activating Token Index: 69
Text #9
Max Range: 0.4417. Min Range: -0.4417
Max Act: 0.3137. Min Act: -0.0000
Data Index: 499451 (The Pile)
Max Activating Token Index: 268
Text #10
Max Range: 0.4417. Min Range: -0.4417
Max Act: 0.3138. Min Act: 0.0001
Data Index: 1049612 (The Pile)
Max Activating Token Index: 875
Text #11
Max Range: 0.4417. Min Range: -0.4417
Max Act: 0.3137. Min Act: 0.0002
Data Index: 925030 (The Pile)
Max Activating Token Index: 40
Text #12
Max Range: 0.4417. Min Range: -0.4417
Max Act: 0.3335. Min Act: 0.0002
Data Index: 1572717 (The Pile)
Max Activating Token Index: 657
Text #13
Max Range: 0.4417. Min Range: -0.4417
Max Act: 0.3113. Min Act: 0.0002
Data Index: 951341 (The Pile)
Max Activating Token Index: 316
Text #14
Max Range: 0.4417. Min Range: -0.4417
Max Act: 0.3181. Min Act: 0.0001
Data Index: 1025551 (The Pile)
Max Activating Token Index: 144
Text #15
Max Range: 0.4417. Min Range: -0.4417
Max Act: 0.3293. Min Act: 0.0001
Data Index: 552945 (The Pile)
Max Activating Token Index: 165
Text #16
Max Range: 0.4417. Min Range: -0.4417
Max Act: 0.3289. Min Act: 0.0001
Data Index: 1502683 (The Pile)
Max Activating Token Index: 1007
Text #17
Max Range: 0.4417. Min Range: -0.4417
Max Act: 0.3301. Min Act: -0.0000
Data Index: 1210553 (The Pile)
Max Activating Token Index: 428
Text #18
Max Range: 0.4417. Min Range: -0.4417
Max Act: 0.3054. Min Act: 0.0002
Data Index: 72550 (The Pile)
Max Activating Token Index: 439
Text #19
Max Range: 0.4417. Min Range: -0.4417
Max Act: 0.3142. Min Act: 0.0002
Data Index: 302441 (The Pile)
Max Activating Token Index: 493