
Model: GPT-2 Medium (24 layers, 4096 MLP neurons per layer)

Dataset: Open Web Text

Neuron 192 in Layer 1

Load this data into an Interactive Neuroscope

See Documentation here

TransformerLens loading: HookedTransformer.from_pretrained('gpt2-medium')
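As a rough, hypothetical illustration (the prompt below is an arbitrary placeholder, not one of the listed examples), the neuron's per-token activations can be read out with TransformerLens as follows; the "Max Act" and "Max Activating Token Index" fields in the listings below correspond to the max and argmax of these per-position activations over each text:

```python
# Minimal sketch: load GPT-2 Medium with TransformerLens and read out the
# activation of neuron 192 in layer 1's MLP at every token position.
# The prompt is an arbitrary placeholder, not taken from the examples below.
from transformer_lens import HookedTransformer, utils

model = HookedTransformer.from_pretrained("gpt2-medium")

LAYER, NEURON = 1, 192
prompt = "An arbitrary prompt to probe the neuron with."

tokens = model.to_tokens(prompt)
_, cache = model.run_with_cache(tokens)

# Post-nonlinearity MLP activations: shape [batch, position, d_mlp], d_mlp = 4096.
acts = cache[utils.get_act_name("post", LAYER)][0, :, NEURON]

for token, act in zip(model.to_str_tokens(prompt), acts.tolist()):
    print(f"{token!r}: {act:.4f}")

# For a given text, "Max Act" is the largest of these values and
# "Max Activating Token Index" is the position where it occurs.
print("Max Act:", round(acts.max().item(), 4))
print("Max Activating Token Index:", int(acts.argmax().item()))
```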



Text #0

Max Range: 1.3226. Min Range: -1.3226

Max Act: 1.3226. Min Act: -0.1698

Data Index: 3730920 (Open Web Text)

Max Activating Token Index: 415



Text #1

Max Range: 1.3226. Min Range: -1.3226

Max Act: 1.3226. Min Act: -0.1699

Data Index: 1141753 (Open Web Text)

Max Activating Token Index: 169



Text #2

Max Range: 1.3226. Min Range: -1.3226

Max Act: 1.2873. Min Act: -0.1699

Data Index: 8764972 (Open Web Text)

Max Activating Token Index: 98



Text #3

Max Range: 1.3226. Min Range: -1.3226

Max Act: 1.3049. Min Act: -0.1685

Data Index: 2171940 (Open Web Text)

Max Activating Token Index: 220



Text #4

Max Range: 1.3226. Min Range: -1.3226

Max Act: 1.2873. Min Act: -0.1700

Data Index: 1971687 (Open Web Text)

Max Activating Token Index: 914



Text #5

Max Range: 1.3226. Min Range: -1.3226

Max Act: 1.2520. Min Act: -0.1688

Data Index: 5262069 (Open Web Text)

Max Activating Token Index: 141



Text #6

Max Range: 1.3226. Min Range: -1.3226

Max Act: 1.2344. Min Act: -0.1700

Data Index: 6257908 (Open Web Text)

Max Activating Token Index: 73



Text #7

Max Range: 1.3226. Min Range: -1.3226

Max Act: 1.2520. Min Act: -0.1647

Data Index: 3594308 (Open Web Text)

Max Activating Token Index: 213



Text #8

Max Range: 1.3226. Min Range: -1.3226

Max Act: 1.2168. Min Act: -0.1700

Data Index: 4911310 (Open Web Text)

Max Activating Token Index: 173



Text #9

Max Range: 1.3226. Min Range: -1.3226

Max Act: 1.2520. Min Act: -0.1700

Data Index: 395274 (Open Web Text)

Max Activating Token Index: 73



Text #10

Max Range: 1.3226. Min Range: -1.3226

Max Act: 1.1991. Min Act: -0.1700

Data Index: 5273193 (Open Web Text)

Max Activating Token Index: 147



Text #11

Max Range: 1.3226. Min Range: -1.3226

Max Act: 1.1991. Min Act: -0.1700

Data Index: 1489362 (Open Web Text)

Max Activating Token Index: 258



Text #12

Max Range: 1.3226. Min Range: -1.3226

Max Act: 1.1991. Min Act: -0.1700

Data Index: 5741940 (Open Web Text)

Max Activating Token Index: 672



Text #13

Max Range: 1.3226. Min Range: -1.3226

Max Act: 1.2168. Min Act: -0.1684

Data Index: 1141756 (Open Web Text)

Max Activating Token Index: 725



Text #14

Max Range: 1.3226. Min Range: -1.3226

Max Act: 1.1991. Min Act: -0.1700

Data Index: 5455581 (Open Web Text)

Max Activating Token Index: 210



Text #15

Max Range: 1.3226. Min Range: -1.3226

Max Act: 1.1991. Min Act: -0.1699

Data Index: 6855907 (Open Web Text)

Max Activating Token Index: 185



Text #16

Max Range: 1.3226. Min Range: -1.3226

Max Act: 1.1815. Min Act: -0.1686

Data Index: 3354433 (Open Web Text)

Max Activating Token Index: 308



Text #17

Max Range: 1.3226. Min Range: -1.3226

Max Act: 1.1640. Min Act: -0.1694

Data Index: 7165972 (Open Web Text)

Max Activating Token Index: 262



Text #18

Max Range: 1.3226. Min Range: -1.3226

Max Act: 1.1815. Min Act: -0.1664

Data Index: 704751 (Open Web Text)

Max Activating Token Index: 689



Text #19

Max Range: 1.3226. Min Range: -1.3226

Max Act: 1.1640. Min Act: -0.1700

Data Index: 5403087 (Open Web Text)

Max Activating Token Index: 466
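The ranking above can in principle be regenerated by scanning a corpus and keeping the documents on which the neuron fires hardest. A rough sketch of that loop follows; `docs` is assumed to be any iterable of strings, since the exact Open Web Text snapshot and ordering behind the Data Index values is not specified on this page:

```python
# Rough sketch: rank documents by the maximum activation of neuron 192 in
# layer 1, keeping the top 20, in the spirit of the listing above.
# `docs` is a placeholder for whatever corpus iterator is used; the specific
# Open Web Text snapshot behind the Data Index values is not reproduced here.
import heapq

from transformer_lens import HookedTransformer, utils

model = HookedTransformer.from_pretrained("gpt2-medium")
LAYER, NEURON, TOP_K = 1, 192, 20


def top_activating_examples(docs, top_k=TOP_K):
    heap = []  # min-heap of (max_act, data_index, max_token_index)
    for data_index, text in enumerate(docs):
        tokens = model.to_tokens(text)[:, :1024]  # stay within GPT-2's context window
        _, cache = model.run_with_cache(tokens)
        acts = cache[utils.get_act_name("post", LAYER)][0, :, NEURON]
        heapq.heappush(heap, (acts.max().item(), data_index, int(acts.argmax().item())))
        if len(heap) > top_k:
            heapq.heappop(heap)  # drop the weakest example kept so far
    return sorted(heap, reverse=True)  # strongest first, as in the listing above
```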
