Model: SoLU, 12 Layers, 6144 Neurons per Layer

Dataset: 80% C4 (Web Text) and 20% Python Code

Neuron 652 in Layer 10

Load this data into an Interactive Neuroscope

See Documentation here

Transformer Lens Loading: HookedTransformer.from_pretrained('solu-12l')
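
Below is a minimal sketch of how that loading line can be used to inspect this neuron, assuming the standard TransformerLens API; the prompt and variable names are placeholders, not taken from this page, and the post-activation hook (blocks.10.mlp.hook_post) is presumably where the Max Act numbers shown here come from.

```python
# Minimal sketch, assuming the standard TransformerLens API.
from transformer_lens import HookedTransformer, utils

model = HookedTransformer.from_pretrained("solu-12l")

LAYER, NEURON = 10, 652
prompt = "Some text to probe the neuron with."  # arbitrary placeholder prompt

# run_with_cache returns the logits plus a cache of intermediate activations.
logits, cache = model.run_with_cache(prompt)

# Post-activation MLP values live at blocks.{layer}.mlp.hook_post and have
# shape [batch, position, d_mlp]; d_mlp = 6144 for this model.
acts = cache[utils.get_act_name("post", LAYER)][0, :, NEURON]

top_pos = acts.argmax().item()
tokens = model.to_str_tokens(prompt)
print(f"Max Act: {acts.max().item():.4f} at token index {top_pos} ({tokens[top_pos]!r})")
```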



Max Activating Dataset Examples

All 20 examples are drawn from C4 (Web Text). Max Range: 6.7422. Min Range: -6.7422 (identical for every example below).

Text #   Max Act   Min Act   Data Index   Max Activating Token Index
0        6.7422    -0.0000   1000782      671
1        6.5005    -0.0000   1326477      414
2        6.7081    -0.0000   329743       660
3        6.3202    -0.0000   804525       372
4        6.3772    -0.0000   439037       989
5        6.0318    -0.0000   1335104      806
6        5.8302    -0.0000   462938       854
7        5.9988    -0.0000   1058025      562
8        6.0132    -0.0000   105996       883
9        6.1500    -0.0000   1221925      925
10       6.0061    -0.0000   477176       835
11       5.8036    -0.0000   597636       1003
12       5.9513    -0.0000   1348928      972
13       5.7056    -0.0000   235448       772
14       5.8427    -0.0000   857004       891
15       5.6574    -0.0000   1139287      977
16       5.8262    -0.0000   443361       798
17       5.7332    -0.0000   1286254      754
18       5.7900    -0.0000   1088454      913
19       5.5370    -0.0000   217166       522
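
For context on where numbers like these come from, here is a rough sketch (not the actual pipeline behind this page) of how such a table could be produced: stream a corpus, record the neuron's peak activation and its token position for each text, and keep the top k. The dataset name below is a stand-in for the 80% C4 / 20% Python mixture, which this page does not name; the truncation length, sample count, and top-k size are likewise assumptions.

```python
# Rough sketch only: plain C4 is used as a stand-in corpus, and the sample
# count / top-k size are arbitrary choices for illustration.
import heapq
import torch
from datasets import load_dataset
from transformer_lens import HookedTransformer, utils

model = HookedTransformer.from_pretrained("solu-12l")
LAYER, NEURON, TOP_K = 10, 652, 5

dataset = load_dataset("allenai/c4", "en", split="train", streaming=True)

top = []  # entries of (max activation, data index, max activating token index)
for data_index, example in enumerate(dataset.take(200)):
    tokens = model.to_tokens(example["text"])[:, :1024]  # truncate to the 1024-token context
    with torch.no_grad():
        _, cache = model.run_with_cache(tokens)
    acts = cache[utils.get_act_name("post", LAYER)][0, :, NEURON]
    heapq.heappush(top, (acts.max().item(), data_index, acts.argmax().item()))
    if len(top) > TOP_K:
        heapq.heappop(top)  # drop the weakest of the current top-k

for max_act, data_index, token_index in sorted(top, reverse=True):
    print(f"Data Index {data_index}: Max Act {max_act:.4f} at token index {token_index}")
```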