
Model: GPT-2 Small (12 layers, 3072 neurons per layer)

Dataset: Open Web Text

Neuron 2710 in Layer 0

Load this data into an Interactive Neuroscope

See the documentation for details.

Load the model in TransformerLens: HookedTransformer.from_pretrained('gpt2-small')
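A minimal sketch of loading the model and reading this neuron's activations on a prompt, assuming TransformerLens is installed (the prompt itself is an arbitrary placeholder):

```python
from transformer_lens import HookedTransformer, utils

model = HookedTransformer.from_pretrained("gpt2-small")

text = "The quick brown fox jumps over the lazy dog."  # arbitrary example prompt
logits, cache = model.run_with_cache(text)

# Post-GELU MLP activations for layer 0: shape [batch, position, d_mlp=3072]
acts = cache[utils.get_act_name("post", 0)][0, :, 2710]  # neuron 2710

max_pos = acts.argmax().item()
print(f"Max act {acts.max().item():.4f} on token {model.to_str_tokens(text)[max_pos]!r}")
```

The max activating token index reported for each example below is exactly this argmax position within that example's token sequence.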




Top 20 max activating dataset examples, all from Open Web Text (full texts are truncated and omitted here). Max Range: 0.9458. Min Range: -0.9458, identical across all examples.

Text   Max Act   Min Act   Data Index   Max Activating Token Index
#0     0.9458    -0.1264   6498484      397
#1     0.9445    -0.0582   3801961      427
#2     0.9401    -0.0795   7427837      585
#3     0.9370    -0.0433   8422387      65
#4     0.9251    -0.0707   2828770      662
#5     0.9297    -0.0742   8625240      298
#6     0.9266    -0.0735   4680815      726
#7     0.9286    -0.0994   6162754      88
#8     0.9246    -0.0993   519501       790
#9     0.9306    -0.0844   2336669      810
#10    0.9275    -0.0708   7067341      546
#11    0.9324    -0.0726   5947927      92
#12    0.9306    -0.0726   4715177      398
#13    0.9178    -0.0602   5379818      371
#14    0.9188    -0.0549   7375481      46
#15    0.9213    -0.0714   4564982      384
#16    0.9298    -0.0564   3639161      457
#17    0.9199    -0.0684   4122620      550
#18    0.9174    -0.0707   1882236      454
#19    0.9149    -0.0829   8193114      119
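A table like the one above can in principle be reproduced by scanning a tokenized dataset and recording each example's max activation for this neuron. Here is a minimal sketch; the dataset (stas/openwebtext-10k), the slice size, and the truncation length are illustrative assumptions and will not reproduce the exact data indices above, which depend on Neuroscope's own preprocessing of Open Web Text.

```python
import torch
from datasets import load_dataset
from transformer_lens import HookedTransformer, utils

model = HookedTransformer.from_pretrained("gpt2-small")
LAYER, NEURON = 0, 2710

# Small public Open Web Text sample (assumption; Neuroscope uses its own OWT copy)
dataset = load_dataset("stas/openwebtext-10k", split="train")

records = []
with torch.no_grad():
    for idx, example in enumerate(dataset.select(range(100))):  # scan a small slice
        tokens = model.to_tokens(example["text"])[:, :1024]  # truncate to the context window
        _, cache = model.run_with_cache(tokens, return_type=None)
        acts = cache[utils.get_act_name("post", LAYER)][0, :, NEURON]
        records.append((idx, acts.max().item(), acts.min().item(), acts.argmax().item()))

# Report the top examples by max activation, mirroring the table's columns
for idx, max_act, min_act, max_pos in sorted(records, key=lambda r: -r[1])[:20]:
    print(f"#{idx}: max_act={max_act:.4f}  min_act={min_act:.4f}  max_token_index={max_pos}")
```

Caching every activation is wasteful for a large scan; a hook that keeps only blocks.0.mlp.hook_post (e.g. via the names_filter argument to run_with_cache) would reduce memory substantially.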