
Model: SoLU, 4 Layers, 2048 Neurons per Layer

Dataset: The Pile

Neuron 1480 in Layer 1


TransformerLens loading: HookedTransformer.from_pretrained('solu-4l-pile')
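
A minimal loading-and-inspection sketch, assuming the standard TransformerLens API. The prompt is an arbitrary placeholder (not a Pile example), and 'blocks.1.mlp.hook_post' is the usual TransformerLens hook name for post-activation layer-1 MLP neurons; the exact hook point Neuroscope reads from is not stated on this page.

    from transformer_lens import HookedTransformer

    # Load the 4-layer SoLU model trained on The Pile.
    model = HookedTransformer.from_pretrained('solu-4l-pile')

    # Placeholder prompt, tokenized with the model's own tokenizer.
    tokens = model.to_tokens("An example prompt.")
    _, cache = model.run_with_cache(tokens)

    # Post-activation MLP neurons in layer 1: shape [batch, seq_len, d_mlp=2048].
    layer1_acts = cache['blocks.1.mlp.hook_post']
    print(layer1_acts[0, :, 1480])  # activation of neuron 1480 at every token position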



Text #0

Max Range: 0.1100. Min Range: -0.1100

Max Act: 0.1100. Min Act: -0.0004

Data Index: 1725385 (The Pile)

Max Activating Token Index: 110
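
Continuing from the loading sketch above, a hedged approximation of how the per-text statistics reported here (Max Act, Min Act, Max Activating Token Index) could be computed for one tokenized text; Neuroscope's exact preprocessing and normalization are not shown on this page.

    # Activations of neuron 1480 in layer 1 across the sequence: shape [seq_len].
    acts = cache['blocks.1.mlp.hook_post'][0, :, 1480]

    max_act = acts.max().item()     # reported as "Max Act", e.g. 0.1100 for Text #0
    min_act = acts.min().item()     # reported as "Min Act", e.g. -0.0004
    max_index = int(acts.argmax())  # reported as "Max Activating Token Index", e.g. 110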


Text #1

Max Range: 0.1100. Min Range: -0.1100

Max Act: 0.0930. Min Act: -0.0004

Data Index: 1822051 (The Pile)

Max Activating Token Index: 130


Text #2

Max Range: 0.1100. Min Range: -0.1100

Max Act: 0.0987. Min Act: -0.0004

Data Index: 1905842 (The Pile)

Max Activating Token Index: 471


Text #3

Max Range: 0.1100. Min Range: -0.1100

Max Act: 0.0835. Min Act: -0.0004

Data Index: 1935180 (The Pile)

Max Activating Token Index: 184


Text #4

Max Range: 0.1100. Min Range: -0.1100

Max Act: 0.0643. Min Act: -0.0004

Data Index: 934727 (The Pile)

Max Activating Token Index: 284


Text #5

Max Range: 0.1100. Min Range: -0.1100

Max Act: 0.0621. Min Act: -0.0004

Data Index: 649582 (The Pile)

Max Activating Token Index: 583


Text #6

Max Range: 0.1100. Min Range: -0.1100

Max Act: 0.0554. Min Act: -0.0004

Data Index: 1304229 (The Pile)

Max Activating Token Index: 844


Text #7

Max Range: 0.1100. Min Range: -0.1100

Max Act: 0.0583. Min Act: -0.0004

Data Index: 1860748 (The Pile)

Max Activating Token Index: 332


Text #8

Max Range: 0.1100. Min Range: -0.1100

Max Act: 0.0567. Min Act: -0.0004

Data Index: 1698336 (The Pile)

Max Activating Token Index: 225


Text #9

Max Range: 0.1100. Min Range: -0.1100

Max Act: 0.0552. Min Act: -0.0004

Data Index: 1271067 (The Pile)

Max Activating Token Index: 579


Text #10

Max Range: 0.1100. Min Range: -0.1100

Max Act: 0.0540. Min Act: -0.0004

Data Index: 233881 (The Pile)

Max Activating Token Index: 327


Text #11

Max Range: 0.1100. Min Range: -0.1100

Max Act: 0.0501. Min Act: -0.0004

Data Index: 659570 (The Pile)

Max Activating Token Index: 370


Text #12

Max Range: 0.1100. Min Range: -0.1100

Max Act: 0.0509. Min Act: -0.0004

Data Index: 1112174 (The Pile)

Max Activating Token Index: 895


Text #13

Max Range: 0.1100. Min Range: -0.1100

Max Act: 0.0508. Min Act: -0.0004

Data Index: 1356133 (The Pile)

Max Activating Token Index: 327


Text #14

Max Range: 0.1100. Min Range: -0.1100

Max Act: 0.0463. Min Act: -0.0004

Data Index: 1115436 (The Pile)

Max Activating Token Index: 127


Text #15

Max Range: 0.1100. Min Range: -0.1100

Max Act: 0.0482. Min Act: -0.0004

Data Index: 216747 (The Pile)

Max Activating Token Index: 361


Text #16

Max Range: 0.1100. Min Range: -0.1100

Max Act: 0.0491. Min Act: -0.0004

Data Index: 572762 (The Pile)

Max Activating Token Index: 778


Text #17

Max Range: 0.1100. Min Range: -0.1100

Max Act: 0.0469. Min Act: -0.0004

Data Index: 1903287 (The Pile)

Max Activating Token Index: 746


Text #18

Max Range: 0.1100. Min Range: -0.1100

Max Act: 0.0463. Min Act: -0.0004

Data Index: 482939 (The Pile)

Max Activating Token Index: 671


Text #19

Max Range: 0.1100. Min Range: -0.1100

Max Act: 0.0483. Min Act: -0.0003

Data Index: 1430119 (The Pile)

Max Activating Token Index: 191
