
Model: SoLU, 12 Layers, 6144 Neurons per Layer

Dataset: 80% C4 (Web Text) and 20% Python Code

Neuron 5632 in Layer 8


Transformer Lens Loading: HookedTransformer.from_pretrained('solu-12l')
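Below is a minimal sketch of loading this model and reading off this neuron's activations with TransformerLens. The prompt string is a placeholder (the dataset texts below are truncated); the layer and neuron indices come from the header above.

from transformer_lens import HookedTransformer

# Load the 12-layer SoLU model named above.
model = HookedTransformer.from_pretrained('solu-12l')

# Run a prompt and cache all intermediate activations.
text = 'placeholder prompt'
_, cache = model.run_with_cache(text)

# Post-SoLU MLP activations for layer 8: shape [batch, seq_len, 6144].
mlp_post = cache['post', 8]

# Activations of neuron 5632 at every token position.
neuron_acts = mlp_post[0, :, 5632]
print(neuron_acts.max().item())  # compare against the Max Act values below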



Top 20 max activating dataset examples. All 20 are drawn from C4 (Web Text); the full texts are truncated in this view. Max Range: 2.1215. Min Range: -2.1215.

Text #   Max Act   Min Act   Data Index   Max Activating Token Index
0        2.1215    -0.0000   807373       950
1        2.0471    -0.0000   1099725      922
2        1.9875    -0.0000   1270333      834
3        1.3589    -0.0000   1253281      661
4        1.4434    -0.0000   1188229      403
5        1.3731     0.0000   51257        864
6        1.3799    -0.0000   278208       364
7        1.3103    -0.0000   772928       1023
8        1.1757    -0.0000   1307558      460
9        1.2497    -0.0000   1359269      537
10       1.2152    -0.0000   616073       331
11       1.1417    -0.0000   120932       726
12       1.2711    -0.0000   1338053      329
13       1.1777    -0.0000   865557       627
14       1.1476    -0.0000   1127850      84
15       1.0508    -0.0000   869316       887
16       1.1178    -0.0000   52330        546
17       1.0415    -0.0000   134003       588
18       1.1076    -0.0000   883230       963
19       1.1324    -0.0000   183304       490
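Each row above can in principle be recomputed from such a cache: Max Act is the maximum of the neuron's activation over token positions, and Max Activating Token Index is the position of that maximum. A minimal sketch follows, reusing the model from the snippet above; mapping a Data Index back to its exact C4 text is specific to Neuroscope's copy of the dataset and is not reproduced here.

def neuron_stats(model, text, layer=8, neuron=5632):
    # Return (max activation, index of the max activating token) for one text.
    _, cache = model.run_with_cache(text)
    acts = cache['post', layer][0, :, neuron]  # activation at each token position
    return acts.max().item(), int(acts.argmax().item())

# Example usage (placeholder text; the real C4 examples are truncated above):
# max_act, token_index = neuron_stats(model, 'some dataset example text')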