
Model: GPT-2 Small (12 layers, 3072 neurons per layer)

Dataset: Open Web Text

Neuron 2548 in Layer 1

Load this data into an Interactive Neuroscope

See Documentation here

Hooked Transformer Loading: HookedTransformer.from_pretrained('gpt2-small')




Top activating dataset examples (all from Open Web Text). Activation range across all examples: max 1.1338, min -1.1338.

Text  Max Act  Min Act  Data Index  Max Activating Token Index
#0    1.1338   -0.1673     273253   1023
#1    1.0950   -0.1690    6755662   1023
#2    1.0623   -0.1691    6534186      5
#3    1.0462   -0.1575    3323572   1023
#4    1.0239   -0.1510     135471     24
#5    1.0279   -0.1583    6595039     11
#6    1.0156   -0.1268    3950023     85
#7    1.0156   -0.1592    7379261     18
#8    1.0116   -0.1545    1203461    440
#9    1.0060   -0.1569    7411586   1020
#10   1.0068   -0.1610    7513218      4
#11   1.0065   -0.1425    5180877     41
#12   0.9995   -0.1647     177197      6
#13   1.0024   -0.1656    8614158   1023
#14   0.9934   -0.1587    7667001   1023
#15   0.9909   -0.1564    8659607     32
#16   0.9966   -0.1647    6800848      8
#17   0.9953   -0.1614    1306088     18
#18   0.9908   -0.1510    2062237     58
#19   0.9918   -0.1397     824346     60