
Model: GPT-2 Small (12 layers, 3072 MLP neurons per layer)

Dataset: Open Web Text

Neuron 2321 in Layer 2

Load this data into an Interactive Neuroscope

See Documentation here

TransformerLens loading: HookedTransformer.from_pretrained('gpt2-small')
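
As a quick sketch (this is not the Neuroscope pipeline itself), the same neuron can be probed directly in TransformerLens; the prompt string below is a made-up placeholder:

from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2-small")

LAYER, NEURON = 2, 2321
text = "An example prompt to probe the neuron with."  # placeholder prompt

# run_with_cache stores intermediate activations; the MLP neuron activations
# live at blocks.{layer}.mlp.hook_post with shape [batch, pos, d_mlp].
logits, cache = model.run_with_cache(text)
acts = cache[f"blocks.{LAYER}.mlp.hook_post"][0, :, NEURON]

tokens = model.to_str_tokens(text)
pos = int(acts.argmax())
print(f"Max act {acts[pos].item():.4f} on token {tokens[pos]!r} at position {pos}")

The "max act" printed here for a prompt corresponds to the Max Act values reported for the dataset examples below.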




Text #0

Max Range: 3.7252. Min Range: -3.7252

Max Act: 3.7252. Min Act: -0.1700

Data Index: 6329157 (Open Web Text)

Max Activating Token Index: 716



Text #1

Max Range: 3.7252. Min Range: -3.7252

Max Act: 3.6487. Min Act: -0.1700

Data Index: 2360660 (Open Web Text)

Max Activating Token Index: 454



Text #2

Max Range: 3.7252. Min Range: -3.7252

Max Act: 3.6038. Min Act: -0.1700

Data Index: 1654825 (Open Web Text)

Max Activating Token Index: 210



Text #3

Max Range: 3.7252. Min Range: -3.7252

Max Act: 3.5545. Min Act: -0.1700

Data Index: 1298528 (Open Web Text)

Max Activating Token Index: 865



Text #4

Max Range: 3.7252. Min Range: -3.7252

Max Act: 3.5394. Min Act: -0.1700

Data Index: 7395534 (Open Web Text)

Max Activating Token Index: 517



Text #5

Max Range: 3.7252. Min Range: -3.7252

Max Act: 3.5293. Min Act: -0.1700

Data Index: 7046389 (Open Web Text)

Max Activating Token Index: 528



Text #6

Max Range: 3.7252. Min Range: -3.7252

Max Act: 3.4964. Min Act: -0.1700

Data Index: 3909035 (Open Web Text)

Max Activating Token Index: 384



Text #7

Max Range: 3.7252. Min Range: -3.7252

Max Act: 3.4702. Min Act: -0.1700

Data Index: 7129460 (Open Web Text)

Max Activating Token Index: 366



Text #8

Max Range: 3.7252. Min Range: -3.7252

Max Act: 3.4872. Min Act: -0.1700

Data Index: 7455804 (Open Web Text)

Max Activating Token Index: 511



Text #9

Max Range: 3.7252. Min Range: -3.7252

Max Act: 3.4930. Min Act: -0.1700

Data Index: 8661723 (Open Web Text)

Max Activating Token Index: 490



Text #10

Max Range: 3.7252. Min Range: -3.7252

Max Act: 3.4895. Min Act: -0.1700

Data Index: 484389 (Open Web Text)

Max Activating Token Index: 339



Text #11

Max Range: 3.7252. Min Range: -3.7252

Max Act: 3.4565. Min Act: -0.1700

Data Index: 5640552 (Open Web Text)

Max Activating Token Index: 264



Text #12

Max Range: 3.7252. Min Range: -3.7252

Max Act: 3.4749. Min Act: -0.1700

Data Index: 4578033 (Open Web Text)

Max Activating Token Index: 328



Text #13

Max Range: 3.7252. Min Range: -3.7252

Max Act: 3.4535. Min Act: -0.1700

Data Index: 8744266 (Open Web Text)

Max Activating Token Index: 516



Text #14

Max Range: 3.7252. Min Range: -3.7252

Max Act: 3.4431. Min Act: -0.1700

Data Index: 8439675 (Open Web Text)

Max Activating Token Index: 489



Text #15

Max Range: 3.7252. Min Range: -3.7252

Max Act: 3.4435. Min Act: -0.1700

Data Index: 1751569 (Open Web Text)

Max Activating Token Index: 448



Text #16

Max Range: 3.7252. Min Range: -3.7252

Max Act: 3.4553. Min Act: -0.1700

Data Index: 1744756 (Open Web Text)

Max Activating Token Index: 469



Text #17

Max Range: 3.7252. Min Range: -3.7252

Max Act: 3.4292. Min Act: -0.1700

Data Index: 4088596 (Open Web Text)

Max Activating Token Index: 327



Text #18

Max Range: 3.7252. Min Range: -3.7252

Max Act: 3.4096. Min Act: -0.1700

Data Index: 2319639 (Open Web Text)

Max Activating Token Index: 391



Text #19

Max Range: 3.7252. Min Range: -3.7252

Max Act: 3.4237. Min Act: -0.1700

Data Index: 759262 (Open Web Text)

Max Activating Token Index: 392
