
Model: GPT-2 Large (36 Layers, 5120 Neurons per Layer)

Dataset: Open Web Text

Neuron 473 in Layer 12

Load this data into an Interactive Neuroscope


Transformer Lens Loading: HookedTransformer.from_pretrained('gpt2-large')
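
A minimal sketch of inspecting this neuron with TransformerLens, using the loading call above. The prompt string and variable names are illustrative, and the exact tokenization conventions behind the numbers on this page (e.g. a prepended BOS token) are not confirmed here:

    from transformer_lens import HookedTransformer

    # Load GPT-2 Large (36 layers, 5120 MLP neurons per layer).
    model = HookedTransformer.from_pretrained('gpt2-large')

    LAYER, NEURON = 12, 473  # the neuron documented on this page

    text = "Any prompt you want to probe the neuron with."
    tokens = model.to_tokens(text)

    # Cache all activations; 'blocks.12.mlp.hook_post' holds the post-nonlinearity
    # MLP activations with shape [batch, position, d_mlp].
    _, cache = model.run_with_cache(tokens)
    acts = cache[f"blocks.{LAYER}.mlp.hook_post"][0, :, NEURON]

    max_pos = int(acts.argmax())
    print("Max Act:", round(float(acts.max()), 4))
    print("Max Activating Token Index:", max_pos)
    print("Max Activating Token:", model.to_str_tokens(tokens)[max_pos])

The "Max Act" and "Max Activating Token Index" fields in the records below are the per-text analogues of the two printed values.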



Text #0

Max Range: 3.1038. Min Range: -3.1038

Max Act: 3.1038. Min Act: -0.1700

Data Index: 2148773 (Open Web Text)

Max Activating Token Index: 80

Full Text #0 (truncated)
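
To re-derive the numbers in a record like Text #0, here is a hedged sketch under an unverified assumption: that "Data Index" is a row index into a Hugging Face copy of Open Web Text and that token positions are counted over the (truncated) tokenized text. The exact dataset copy and ordering used to generate this page are not confirmed, so a mismatch may simply mean the assumption is wrong:

    from datasets import load_dataset
    from transformer_lens import HookedTransformer

    model = HookedTransformer.from_pretrained('gpt2-large')
    LAYER, NEURON = 12, 473

    # Assumption: Data Index 2148773 indexes this copy of Open Web Text.
    ds = load_dataset('openwebtext', split='train')
    text = ds[2148773]['text']

    tokens = model.to_tokens(text)[:, :1024]  # stay within GPT-2's context window
    _, cache = model.run_with_cache(tokens)
    acts = cache[f'blocks.{LAYER}.mlp.hook_post'][0, :, NEURON]

    print('Max Act:', round(float(acts.max()), 4))            # page reports 3.1038
    print('Max Activating Token Index:', int(acts.argmax()))  # page reports 80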


Text #1

Max Range: 3.1038. Min Range: -3.1038

Max Act: 2.8825. Min Act: -0.1700

Data Index: 2213692 (Open Web Text)

Max Activating Token Index: 139

Full Text #1 (truncated)


Text #2

Max Range: 3.1038. Min Range: -3.1038

Max Act: 2.8507. Min Act: -0.1700

Data Index: 2873674 (Open Web Text)

Max Activating Token Index: 945

Full Text #2 (truncated)


Text #3

Max Range: 3.1038. Min Range: -3.1038

Max Act: 2.8348. Min Act: -0.1700

Data Index: 4129342 (Open Web Text)

Max Activating Token Index: 478

Full Text #3 (truncated)


Text #4

Max Range: 3.1038. Min Range: -3.1038

Max Act: 2.8030. Min Act: -0.1700

Data Index: 5036729 (Open Web Text)

Max Activating Token Index: 937

Full Text #4 (truncated)


Text #5

Max Range: 3.1038. Min Range: -3.1038

Max Act: 2.8189. Min Act: -0.1700

Data Index: 7611929 (Open Web Text)

Max Activating Token Index: 513

Full Text #5 (truncated)


Text #6

Max Range: 3.1038. Min Range: -3.1038

Max Act: 2.7552. Min Act: -0.1700

Data Index: 193248 (Open Web Text)

Max Activating Token Index: 916

Full Text #6 (truncated)


Text #7

Max Range: 3.1038. Min Range: -3.1038

Max Act: 2.7712. Min Act: -0.1700

Data Index: 3435599 (Open Web Text)

Max Activating Token Index: 917

Full Text #7 (truncated)


Text #8

Max Range: 3.1038. Min Range: -3.1038

Max Act: 2.7712. Min Act: -0.1700

Data Index: 62736 (Open Web Text)

Max Activating Token Index: 768

Full Text #8 (truncated)


Text #9

Max Range: 3.1038. Min Range: -3.1038

Max Act: 2.7712. Min Act: -0.1700

Data Index: 2571555 (Open Web Text)

Max Activating Token Index: 473

Full Text #9 (truncated)


Text #10

Max Range: 3.1038. Min Range: -3.1038

Max Act: 2.7712. Min Act: -0.1700

Data Index: 7726570 (Open Web Text)

Max Activating Token Index: 9

Full Text #10 (truncated)


Text #11

Max Range: 3.1038. Min Range: -3.1038

Max Act: 2.7712. Min Act: -0.1700

Data Index: 5813463 (Open Web Text)

Max Activating Token Index: 146

Full Text #11 (truncated)


Text #12

Max Range: 3.1038. Min Range: -3.1038

Max Act: 2.7552. Min Act: -0.1700

Data Index: 6398160 (Open Web Text)

Max Activating Token Index: 333

Full Text #12 (truncated)


Text #13

Max Range: 3.1038. Min Range: -3.1038

Max Act: 2.7552. Min Act: -0.1700

Data Index: 7098615 (Open Web Text)

Max Activating Token Index: 357

Full Text #13 (truncated)


Text #14

Max Range: 3.1038. Min Range: -3.1038

Max Act: 2.7392. Min Act: -0.1700

Data Index: 1200682 (Open Web Text)

Max Activating Token Index: 876

Full Text #14 (truncated)


Text #15

Max Range: 3.1038. Min Range: -3.1038

Max Act: 2.7392. Min Act: -0.1700

Data Index: 844833 (Open Web Text)

Max Activating Token Index: 639

Full Text #15 (truncated)


Text #16

Max Range: 3.1038. Min Range: -3.1038

Max Act: 2.7233. Min Act: -0.1700

Data Index: 4379386 (Open Web Text)

Max Activating Token Index: 705

Full Text #16 (truncated)


Text #17

Max Range: 3.1038. Min Range: -3.1038

Max Act: 2.7233. Min Act: -0.1700

Data Index: 3834476 (Open Web Text)

Max Activating Token Index: 680

Full Text #17 (truncated)


Text #18

Max Range: 3.1038. Min Range: -3.1038

Max Act: 2.6913. Min Act: -0.1700

Data Index: 783514 (Open Web Text)

Max Activating Token Index: 193

Full Text #18 (truncated)


Text #19

Max Range: 3.1038. Min Range: -3.1038

Max Act: 2.7073. Min Act: -0.1700

Data Index: 189612 (Open Web Text)

Max Activating Token Index: 583

Full Text #19 (truncated)