
Model: SoLU, 10 Layers, 5120 Neurons per Layer

Dataset: The Pile

Neuron 5035 in Layer 5


Transformer Lens Loading: HookedTransformer.from_pretrained('solu-10l-pile')
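
As a minimal sketch of inspecting this neuron with TransformerLens (assuming the library is installed; the prompt below is an arbitrary placeholder, not one of the dataset examples):

    from transformer_lens import HookedTransformer, utils

    model = HookedTransformer.from_pretrained('solu-10l-pile')

    LAYER, NEURON = 5, 5035
    prompt = "An arbitrary piece of text to probe the neuron with."

    # Run the model and cache all intermediate activations.
    logits, cache = model.run_with_cache(prompt)

    # MLP post-activation for layer 5: shape [batch, position, d_mlp].
    acts = cache[utils.get_act_name("post", LAYER)]
    neuron_acts = acts[0, :, NEURON]

    # Token with the highest activation for this neuron on this prompt.
    tokens = model.to_str_tokens(prompt)
    max_pos = neuron_acts.argmax().item()
    print(f"Max activation {neuron_acts[max_pos]:.4f} on token {tokens[max_pos]!r} at position {max_pos}")

Running the same loop over a dataset (e.g. The Pile) and keeping the highest-activating positions is, in essence, how the examples below were collected.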



Text #0

Max Range: 0.5940. Min Range: -0.5940

Max Act: 0.5940. Min Act: -0.0000

Data Index: 1429449 (The Pile)

Max Activating Token Index: 157



Text #1

Max Range: 0.5940. Min Range: -0.5940

Max Act: 0.5272. Min Act: -0.0000

Data Index: 707811 (The Pile)

Max Activating Token Index: 350



Text #2

Max Range: 0.5940. Min Range: -0.5940

Max Act: 0.4726. Min Act: -0.0000

Data Index: 1955087 (The Pile)

Max Activating Token Index: 405



Text #3

Max Range: 0.5940. Min Range: -0.5940

Max Act: 0.4237. Min Act: -0.0000

Data Index: 1598780 (The Pile)

Max Activating Token Index: 817



Text #4

Max Range: 0.5940. Min Range: -0.5940

Max Act: 0.4639. Min Act: -0.0000

Data Index: 160711 (The Pile)

Max Activating Token Index: 325



Text #5

Max Range: 0.5940. Min Range: -0.5940

Max Act: 0.4267. Min Act: -0.0000

Data Index: 362132 (The Pile)

Max Activating Token Index: 730



Text #6

Max Range: 0.5940. Min Range: -0.5940

Max Act: 0.3922. Min Act: -0.0000

Data Index: 1268638 (The Pile)

Max Activating Token Index: 773



Text #7

Max Range: 0.5940. Min Range: -0.5940

Max Act: 0.3687. Min Act: -0.0000

Data Index: 1342831 (The Pile)

Max Activating Token Index: 797



Text #8

Max Range: 0.5940. Min Range: -0.5940

Max Act: 0.4191. Min Act: -0.0000

Data Index: 1491236 (The Pile)

Max Activating Token Index: 835



Text #9

Max Range: 0.5940. Min Range: -0.5940

Max Act: 0.4051. Min Act: -0.0000

Data Index: 122419 (The Pile)

Max Activating Token Index: 194



Text #10

Max Range: 0.5940. Min Range: -0.5940

Max Act: 0.4300. Min Act: -0.0000

Data Index: 796609 (The Pile)

Max Activating Token Index: 289



Text #11

Max Range: 0.5940. Min Range: -0.5940

Max Act: 0.3704. Min Act: -0.0000

Data Index: 1054460 (The Pile)

Max Activating Token Index: 95



Text #12

Max Range: 0.5940. Min Range: -0.5940

Max Act: 0.4050. Min Act: -0.0000

Data Index: 657920 (The Pile)

Max Activating Token Index: 89



Text #13

Max Range: 0.5940. Min Range: -0.5940

Max Act: 0.4424. Min Act: -0.0000

Data Index: 763878 (The Pile)

Max Activating Token Index: 651



Text #14

Max Range: 0.5940. Min Range: -0.5940

Max Act: 0.3404. Min Act: -0.0000

Data Index: 7290 (The Pile)

Max Activating Token Index: 198



Text #15

Max Range: 0.5940. Min Range: -0.5940

Max Act: 0.3167. Min Act: -0.0000

Data Index: 767321 (The Pile)

Max Activating Token Index: 941



Text #16

Max Range: 0.5940. Min Range: -0.5940

Max Act: 0.3984. Min Act: -0.0000

Data Index: 1505752 (The Pile)

Max Activating Token Index: 510



Text #17

Max Range: 0.5940. Min Range: -0.5940

Max Act: 0.3981. Min Act: -0.0000

Data Index: 962481 (The Pile)

Max Activating Token Index: 135



Text #18

Max Range: 0.5940. Min Range: -0.5940

Max Act: 0.4458. Min Act: -0.0000

Data Index: 1470004 (The Pile)

Max Activating Token Index: 96



Text #19

Max Range: 0.5940. Min Range: -0.5940

Max Act: 0.3896. Min Act: -0.0000

Data Index: 1334842 (The Pile)

Max Activating Token Index: 537
