
Model: GPT-2 Small (12 layers, 3072 neurons per layer)

Dataset: Open Web Text

Neuron 2606 in Layer 6

Load this data into an Interactive Neuroscope

See the documentation for more details.

Loading the model in TransformerLens: HookedTransformer.from_pretrained('gpt2-small')
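As a minimal sketch of what that loading step looks like in practice (not the Neuroscope pipeline itself; the toy prompt is made up), this neuron's activations can be read from the layer-6 post-nonlinearity MLP hook:

```python
from transformer_lens import HookedTransformer

# Load GPT-2 Small via TransformerLens (the library providing HookedTransformer).
model = HookedTransformer.from_pretrained("gpt2-small")

# Neuron 2606 in layer 6 lives in the post-activation MLP hook,
# "blocks.6.mlp.hook_post", with shape [batch, position, 3072].
LAYER, NEURON = 6, 2606

# Toy prompt purely for illustration.
_, cache = model.run_with_cache("The quick brown fox jumps over the lazy dog.")
neuron_acts = cache[f"blocks.{LAYER}.mlp.hook_post"][0, :, NEURON]
print(neuron_acts)  # one activation per token position
```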




Maximum Activating Examples (Open Web Text)

Max Range: 3.1281. Min Range: -3.1281. Min Act is -0.1700 for every example below.

Text #   Max Act   Data Index   Max Activating Token Index
0        3.1281    3307393      373
1        3.0438    4190705      24
2        3.0323    7810203      349
3        2.9954    1274622      974
4        3.0173    6879941      730
5        3.0043    1750638      204
6        2.9781    6003085      385
7        2.9751    8423631      927
8        2.9637    5650601      893
9        2.9559    5507000      691
10       2.9274    8427184      851
11       2.9469    4191232      277
12       2.9266    255709       896
13       2.9072    3275325      276
14       2.9366    2231375      979
15       2.9037    5680371      929
16       2.8975    1018231      866
17       2.9229    7573546      485
18       2.8878    717547       321
19       2.9032    281799       897

(The full text of each example is truncated on this page; it can be viewed on the interactive version.)
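The per-text statistics above could in principle be recomputed directly. The sketch below is not the Neuroscope pipeline: the Hugging Face dataset name, the assumption that "Data Index" indexes documents of that dump in the same order, and the 1024-token truncation are all guesses, so the numbers may not match exactly.

```python
from datasets import load_dataset
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2-small")

# ASSUMPTION: "Data Index" refers to document positions in an OpenWebText dump;
# the dataset below is a guess and is large to download.
dataset = load_dataset("Skylion007/openwebtext", split="train")

LAYER, NEURON, DATA_INDEX = 6, 2606, 3307393  # values taken from Text #0 above

text = dataset[DATA_INDEX]["text"]
tokens = model.to_tokens(text)[:, :1024]  # GPT-2's context window is 1024 tokens
_, cache = model.run_with_cache(tokens)
acts = cache[f"blocks.{LAYER}.mlp.hook_post"][0, :, NEURON]

print("Max Act:", acts.max().item())
print("Max Activating Token Index:", acts.argmax().item())
```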