
Model: GPT-2 Small: 12 Layers, 3072 Neurons per Layer

Dataset: Open Web Text

Neuron 2935 in Layer 0

Load this data into an Interactive Neuroscope

See the documentation for more details.

Load this model with TransformerLens: HookedTransformer.from_pretrained('gpt2-small')

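As a rough illustration (not the Neuroscope pipeline itself), the sketch below shows how the per-text statistics listed on this page could be computed with TransformerLens for a single passage. The layer/neuron constants match this page; the example text and variable names are placeholder assumptions.

# A minimal sketch, assuming the text of interest is available as a plain string.
# The example text is a placeholder, not a passage from the Neuroscope dataset.
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2-small")

LAYER, NEURON = 0, 2935
text = "An example passage from Open Web Text would go here."

# Run the model and cache all intermediate activations.
_, cache = model.run_with_cache(text)

# MLP post-activation values have shape [batch, position, d_mlp]; slice out one neuron.
neuron_acts = cache["post", LAYER][0, :, NEURON]

max_act = neuron_acts.max().item()               # analogous to "Max Act" below
max_token_index = neuron_acts.argmax().item()    # analogous to "Max Activating Token Index"
print(max_act, max_token_index, model.to_str_tokens(text)[max_token_index])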



Text #0

Max Range: 0.9810. Min Range: -0.9810

Max Act: 0.9802. Min Act: -0.1422

Data Index: 4903061 (Open Web Text)

Max Activating Token Index: 708


Text #1

Max Range: 0.9810. Min Range: -0.9810

Max Act: 0.9810. Min Act: -0.1323

Data Index: 19267 (Open Web Text)

Max Activating Token Index: 710


Text #2

Max Range: 0.9810. Min Range: -0.9810

Max Act: 0.9730. Min Act: -0.1474

Data Index: 2558885 (Open Web Text)

Max Activating Token Index: 721


Text #3

Max Range: 0.9810. Min Range: -0.9810

Max Act: 0.9802. Min Act: -0.1338

Data Index: 5341233 (Open Web Text)

Max Activating Token Index: 718


Text #4

Max Range: 0.9810. Min Range: -0.9810

Max Act: 0.9692. Min Act: -0.1317

Data Index: 779432 (Open Web Text)

Max Activating Token Index: 698


Text #5

Max Range: 0.9810. Min Range: -0.9810

Max Act: 0.9560. Min Act: -0.1397

Data Index: 674422 (Open Web Text)

Max Activating Token Index: 716


Text #6

Max Range: 0.9810. Min Range: -0.9810

Max Act: 0.9597. Min Act: -0.1406

Data Index: 5946747 (Open Web Text)

Max Activating Token Index: 686


Text #7

Max Range: 0.9810. Min Range: -0.9810

Max Act: 0.9582. Min Act: -0.1352

Data Index: 5540013 (Open Web Text)

Max Activating Token Index: 726


Text #8

Max Range: 0.9810. Min Range: -0.9810

Max Act: 0.9534. Min Act: -0.1446

Data Index: 3064016 (Open Web Text)

Max Activating Token Index: 115


Text #9

Max Range: 0.9810. Min Range: -0.9810

Max Act: 0.9545. Min Act: -0.1299

Data Index: 1210761 (Open Web Text)

Max Activating Token Index: 693


Text #10

Max Range: 0.9810. Min Range: -0.9810

Max Act: 0.9485. Min Act: -0.1410

Data Index: 685166 (Open Web Text)

Max Activating Token Index: 684


Text #11

Max Range: 0.9810. Min Range: -0.9810

Max Act: 0.9517. Min Act: -0.1050

Data Index: 4694882 (Open Web Text)

Max Activating Token Index: 711


Text #12

Max Range: 0.9810. Min Range: -0.9810

Max Act: 0.9445. Min Act: -0.1377

Data Index: 7511142 (Open Web Text)

Max Activating Token Index: 728


Text #13

Max Range: 0.9810. Min Range: -0.9810

Max Act: 0.9501. Min Act: -0.1367

Data Index: 8068828 (Open Web Text)

Max Activating Token Index: 713


Text #14

Max Range: 0.9810. Min Range: -0.9810

Max Act: 0.9419. Min Act: -0.1349

Data Index: 5463856 (Open Web Text)

Max Activating Token Index: 700


Text #15

Max Range: 0.9810. Min Range: -0.9810

Max Act: 0.9420. Min Act: -0.1344

Data Index: 7612133 (Open Web Text)

Max Activating Token Index: 736


Text #16

Max Range: 0.9810. Min Range: -0.9810

Max Act: 0.9481. Min Act: -0.1439

Data Index: 6121199 (Open Web Text)

Max Activating Token Index: 721


Text #17

Max Range: 0.9810. Min Range: -0.9810

Max Act: 0.9443. Min Act: -0.1423

Data Index: 885803 (Open Web Text)

Max Activating Token Index: 731


Text #18

Max Range: 0.9810. Min Range: -0.9810

Max Act: 0.9347. Min Act: -0.1231

Data Index: 1971709 (Open Web Text)

Max Activating Token Index: 708


Text #19

Max Range: 0.9810. Min Range: -0.9810

Max Act: 0.9463. Min Act: -0.1390

Data Index: 5577452 (Open Web Text)

Max Activating Token Index: 713

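For completeness, here is a hedged sketch of how a listing like the one above could be reproduced by scanning a corpus and ranking texts by this neuron's maximum activation. The `texts` list is a placeholder; the exact Open Web Text copy and the indexing scheme behind the "Data Index" values are not specified on this page.

# A hedged sketch: scan a placeholder corpus and rank texts by this neuron's max activation.
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2-small")
LAYER, NEURON = 0, 2935

texts = ["first placeholder document", "second placeholder document"]  # assumed corpus

results = []
for data_index, text in enumerate(texts):
    _, cache = model.run_with_cache(text)
    acts = cache["post", LAYER][0, :, NEURON]
    results.append((acts.max().item(), acts.argmax().item(), data_index))

# Sort descending by max activation and print the top entries, analogous to Text #0..#19 above.
for max_act, token_index, data_index in sorted(results, reverse=True)[:20]:
    print(f"Data Index: {data_index}  Max Act: {max_act:.4f}  "
          f"Max Activating Token Index: {token_index}")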