Model: GPT-2 Large (36 layers, 5120 neurons per layer)

Dataset: Open Web Text

Neuron 175 in Layer 3

Load this data into an Interactive Neuroscope

See the documentation for details on these fields.

Transformer Lens Loading: HookedTransformer.from_pretrained('gpt2-large')
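
A minimal sketch, assuming the transformer_lens package is installed, of loading the model this way and reading off this neuron's activations; the prompt is a hypothetical stand-in, since the dataset excerpts themselves are not reproduced on this page:

```python
from transformer_lens import HookedTransformer

# Load GPT-2 Large as described above.
model = HookedTransformer.from_pretrained("gpt2-large")

# Hypothetical prompt; substitute any text of interest.
prompt = "The quick brown fox jumps over the lazy dog."
tokens = model.to_tokens(prompt)  # prepends a BOS token by default

# run_with_cache records every intermediate activation; the post-nonlinearity
# MLP activations of layer 3 live under this hook name.
_, cache = model.run_with_cache(tokens)
acts = cache["blocks.3.mlp.hook_post"][0, :, 175]  # neuron 175, all positions

max_act, max_idx = acts.max(dim=0)
print(f"Max Act: {max_act.item():.4f} at token index {max_idx.item()}")
```

Since to_tokens prepends a BOS token, a Max Activating Token Index of 1 (as reported for every entry below) plausibly corresponds to the first real token of each snippet.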



Text #0

Max Range: 2.2367. Min Range: -2.2367

Max Act: 2.2367. Min Act: -0.0836

Data Index: 7211868 (Open Web Text)

Max Activating Token Index: 1
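
In these listings, Max Range and Min Range appear to be the extreme activations of the neuron across all twenty texts, while Max Act and Min Act are the extremes within each individual text; the Data Index identifies the source document in the corpus. A sketch of looking one up, under the assumption (not confirmed by this page) that the index is a row number in the public Skylion007/openwebtext snapshot on the Hugging Face Hub:

```python
from datasets import load_dataset

# Stream the corpus so we don't have to download all of it up front.
owt = load_dataset("Skylion007/openwebtext", split="train", streaming=True)

TARGET = 7211868  # Data Index reported for Text #0
doc = next(iter(owt.skip(TARGET)))  # skip() still streams past earlier rows
print(doc["text"][:500])  # first 500 characters of the candidate document
```

If Neuroscope's internal ordering differs from this snapshot's, the retrieved document will not match; treat the lookup as a starting point only.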



Text #1

Max Range: 2.2367. Min Range: -2.2367

Max Act: 2.2367. Min Act: -0.0353

Data Index: 765891 (Open Web Text)

Max Activating Token Index: 1



Text #2

Max Range: 2.2367. Min Range: -2.2367

Max Act: 2.2367. Min Act: -0.0196

Data Index: 7634953 (Open Web Text)

Max Activating Token Index: 1



Text #3

Max Range: 2.2367. Min Range: -2.2367

Max Act: 2.2367. Min Act: -0.0805

Data Index: 7573129 (Open Web Text)

Max Activating Token Index: 1



Text #4

Max Range: 2.2367. Min Range: -2.2367

Max Act: 2.2367. Min Act: -0.0989

Data Index: 1828619 (Open Web Text)

Max Activating Token Index: 1



Text #5

Max Range: 2.2367. Min Range: -2.2367

Max Act: 2.2367. Min Act: -0.0223

Data Index: 7241937 (Open Web Text)

Max Activating Token Index: 1



Text #6

Max Range: 2.2367. Min Range: -2.2367

Max Act: 2.2367. Min Act: -0.0587

Data Index: 3740008 (Open Web Text)

Max Activating Token Index: 1



Text #7

Max Range: 2.2367. Min Range: -2.2367

Max Act: 2.2367. Min Act: -0.0855

Data Index: 3587245 (Open Web Text)

Max Activating Token Index: 1



Text #8

Max Range: 2.2367. Min Range: -2.2367

Max Act: 2.2367. Min Act: -0.1120

Data Index: 6863078 (Open Web Text)

Max Activating Token Index: 1



Text #9

Max Range: 2.2367. Min Range: -2.2367

Max Act: 2.2367. Min Act: -0.0292

Data Index: 6719913 (Open Web Text)

Max Activating Token Index: 1



Text #10

Max Range: 2.2367. Min Range: -2.2367

Max Act: 2.2367. Min Act: -0.0999

Data Index: 57377 (Open Web Text)

Max Activating Token Index: 1



Text #11

Max Range: 2.2367. Min Range: -2.2367

Max Act: 2.2367. Min Act: -0.0275

Data Index: 2865947 (Open Web Text)

Max Activating Token Index: 1



Text #12

Max Range: 2.2367. Min Range: -2.2367

Max Act: 2.2367. Min Act: -0.0766

Data Index: 209592 (Open Web Text)

Max Activating Token Index: 1



Text #13

Max Range: 2.2367. Min Range: -2.2367

Max Act: 2.0533. Min Act: -0.0271

Data Index: 7132586 (Open Web Text)

Max Activating Token Index: 1



Text #14

Max Range: 2.2367. Min Range: -2.2367

Max Act: 2.0533. Min Act: -0.0342

Data Index: 5978520 (Open Web Text)

Max Activating Token Index: 1



Text #15

Max Range: 2.2367. Min Range: -2.2367

Max Act: 2.0533. Min Act: -0.0200

Data Index: 6783447 (Open Web Text)

Max Activating Token Index: 1



Text #16

Max Range: 2.2367. Min Range: -2.2367

Max Act: 2.0533. Min Act: -0.0235

Data Index: 794459 (Open Web Text)

Max Activating Token Index: 1



Text #17

Max Range: 2.2367. Min Range: -2.2367

Max Act: 2.0533. Min Act: -0.0949

Data Index: 6481064 (Open Web Text)

Max Activating Token Index: 1



Text #18

Max Range: 2.2367. Min Range: -2.2367

Max Act: 2.0533. Min Act: -0.0956

Data Index: 3747493 (Open Web Text)

Max Activating Token Index: 1



Text #19

Max Range: 2.2367. Min Range: -2.2367

Max Act: 2.0533. Min Act: -0.0331

Data Index: 4113179 (Open Web Text)

Max Activating Token Index: 1
