
Model: GPT-2 Small (12 layers, 3072 MLP neurons per layer)

Dataset: Open Web Text

Neuron 1454 in Layer 0

Load this data into an Interactive Neuroscope (see the documentation for details).

TransformerLens loading: HookedTransformer.from_pretrained('gpt2-small')




Max activating dataset examples. Max Range: 1.3469. Min Range: -1.3469 (this range is shared across all examples below). All Data Indices refer to Open Web Text documents.

Text   Max Act   Min Act   Data Index   Max Activating Token Index
#0     1.3405    -0.1444    2286469      185
#1     1.3418    -0.0533    2474173      416
#2     1.3469    -0.1309    7956291      341
#3     1.3260    -0.1096    3440568      712
#4     1.3111    -0.1590    1374948      249
#5     1.3128    -0.1085    2422918      275
#6     1.2999    -0.0022    7939051      259
#7     1.2935    -0.0839     614552     1010
#8     1.3025    -0.0491    4576660      848
#9     1.3073    -0.1318    4371021      450
#10    1.3013    -0.0141    5123077      603
#11    1.2900    -0.0677    8277011      597
#12    1.2954    -0.0668    1841638      190
#13    1.2827    -0.1180    4996278      356
#14    1.2815    -0.1073    1264672      274
#15    1.2780    -0.1101    4725559      979
#16    1.2829    -0.1520    5708842      792
#17    1.2830    -0.1282     292736     1004
#18    1.2896    -0.1219    6364386       75
#19    1.2856    -0.0958    7648248      946

(The full texts are truncated on this page; load the data into an Interactive Neuroscope to view them.)