
Model: SoLU, 12 Layers, 6144 Neurons per Layer

Dataset: 80% C4 (Web Text) and 20% Python Code

Neuron 5614 in Layer 11

Load this data into an Interactive Neuroscope

See Documentation here

TransformerLens Loading: HookedTransformer.from_pretrained('solu-12l')
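
The snippet below is a minimal sketch (not part of the original page) of loading this checkpoint with TransformerLens and reading off activations of neuron 5614 in layer 11. The prompt is an arbitrary placeholder, and the hook name blocks.11.mlp.hook_post is assumed to be the appropriate post-activation MLP hook for this SoLU model.

```python
# Minimal sketch: inspect neuron 5614 in layer 11 of solu-12l with TransformerLens.
# Assumptions: the prompt below is a placeholder, and blocks.11.mlp.hook_post is
# the post-activation MLP hook for this SoLU checkpoint.
import torch
from transformer_lens import HookedTransformer

LAYER, NEURON = 11, 5614

model = HookedTransformer.from_pretrained('solu-12l')

prompt = "def main():\n    print('hello world')"
tokens = model.to_tokens(prompt)

# Run the model and cache all intermediate activations.
_, cache = model.run_with_cache(tokens)

# Post-activation MLP neurons live at blocks.{layer}.mlp.hook_post,
# with shape [batch, position, d_mlp].
acts = cache[f"blocks.{LAYER}.mlp.hook_post"][0, :, NEURON]

# Report the token where this neuron activates most strongly.
max_pos = int(torch.argmax(acts))
print(f"Max act {acts[max_pos]:.4f} at token index {max_pos}: "
      f"{model.to_str_tokens(tokens)[max_pos]!r}")
```

Running this over a large dataset slice (rather than a single prompt) and keeping the top-scoring positions is how max activating examples like those listed below can be collected.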



Text #0

Max Range: 2.5781. Min Range: -2.5781

Max Act: 2.5781. Min Act: -0.0000

Data Index: 833972 (C4 (Web Text))

Max Activating Token Index: 891



Text #1

Max Range: 2.5781. Min Range: -2.5781

Max Act: 1.9740. Min Act: -0.0000

Data Index: 285242 (C4 (Web Text))

Max Activating Token Index: 506



Text #2

Max Range: 2.5781. Min Range: -2.5781

Max Act: 1.9444. Min Act: -0.0000

Data Index: 200752 (C4 (Web Text))

Max Activating Token Index: 842



Text #3

Max Range: 2.5781. Min Range: -2.5781

Max Act: 1.7954. Min Act: -0.0000

Data Index: 1131323 (C4 (Web Text))

Max Activating Token Index: 80



Text #4

Max Range: 2.5781. Min Range: -2.5781

Max Act: 1.4430. Min Act: -0.0000

Data Index: 1135527 (C4 (Web Text))

Max Activating Token Index: 668



Text #5

Max Range: 2.5781. Min Range: -2.5781

Max Act: 1.7027. Min Act: -0.0000

Data Index: 501311 (C4 (Web Text))

Max Activating Token Index: 769



Text #6

Max Range: 2.5781. Min Range: -2.5781

Max Act: 1.6199. Min Act: -0.0000

Data Index: 800298 (C4 (Web Text))

Max Activating Token Index: 502



Text #7

Max Range: 2.5781. Min Range: -2.5781

Max Act: 1.7193. Min Act: -0.0000

Data Index: 116080 (C4 (Web Text))

Max Activating Token Index: 265



Text #8

Max Range: 2.5781. Min Range: -2.5781

Max Act: 1.6690. Min Act: -0.0000

Data Index: 1434475 (Python Code)

Max Activating Token Index: 773



Text #9

Max Range: 2.5781. Min Range: -2.5781

Max Act: 1.4384. Min Act: -0.0000

Data Index: 1538430 (Python Code)

Max Activating Token Index: 523



Text #10

Max Range: 2.5781. Min Range: -2.5781

Max Act: 1.5832. Min Act: -0.0000

Data Index: 1295243 (C4 (Web Text))

Max Activating Token Index: 715



Text #11

Max Range: 2.5781. Min Range: -2.5781

Max Act: 1.2247. Min Act: -0.0000

Data Index: 1302008 (C4 (Web Text))

Max Activating Token Index: 379



Text #12

Max Range: 2.5781. Min Range: -2.5781

Max Act: 1.4662. Min Act: -0.0000

Data Index: 630797 (C4 (Web Text))

Max Activating Token Index: 398



Text #13

Max Range: 2.5781. Min Range: -2.5781

Max Act: 1.2901. Min Act: -0.0000

Data Index: 994313 (C4 (Web Text))

Max Activating Token Index: 567



Text #14

Max Range: 2.5781. Min Range: -2.5781

Max Act: 1.2979. Min Act: -0.0000

Data Index: 394551 (C4 (Web Text))

Max Activating Token Index: 247



Text #15

Max Range: 2.5781. Min Range: -2.5781

Max Act: 1.4479. Min Act: -0.0000

Data Index: 1030190 (C4 (Web Text))

Max Activating Token Index: 232



Text #16

Max Range: 2.5781. Min Range: -2.5781

Max Act: 1.4149. Min Act: -0.0000

Data Index: 147598 (C4 (Web Text))

Max Activating Token Index: 107



Text #17

Max Range: 2.5781. Min Range: -2.5781

Max Act: 1.6003. Min Act: -0.0000

Data Index: 1085759 (C4 (Web Text))

Max Activating Token Index: 367



Text #18

Max Range: 2.5781. Min Range: -2.5781

Max Act: 1.4675. Min Act: -0.0000

Data Index: 693662 (C4 (Web Text))

Max Activating Token Index: 493



Text #19

Max Range: 2.5781. Min Range: -2.5781

Max Act: 1.4695. Min Act: -0.0000

Data Index: 327606 (C4 (Web Text))

Max Activating Token Index: 635
