
Model: GPT-2 Small (12 layers, 3072 MLP neurons per layer)

Dataset: Open Web Text

Neuron 515 in Layer 2

Load this data into an Interactive Neuroscope

See the Neuroscope documentation for details.

Hooked Transformer Loading: HookedTransformer.from_pretrained('gpt2-small')
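
As a rough sketch (not part of the original page), the loading call above can be used with TransformerLens to read this neuron's activations directly; the prompt below is illustrative, and "gpt2" is the canonical model name if your TransformerLens version does not accept the "gpt2-small" alias.

```python
# Hedged sketch: load GPT-2 Small via TransformerLens and inspect neuron 515 in layer 2.
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2-small")  # or "gpt2" if the alias is unavailable

LAYER, NEURON = 2, 515
prompt = "An arbitrary prompt to probe the neuron with."  # illustrative text

tokens = model.to_tokens(prompt)          # shape [1, n_tokens], BOS prepended
_, cache = model.run_with_cache(tokens)

# MLP neuron activations live at blocks.{layer}.mlp.hook_post, shape [batch, pos, d_mlp]
acts = cache["post", LAYER][0, :, NEURON]

for tok, act in zip(model.to_str_tokens(prompt), acts.tolist()):
    print(f"{tok!r}: {act:.4f}")
```

This prints one activation per token, the same per-token signal used to select the max activating examples listed below.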




Text #0

Max Range: 3.6610. Min Range: -3.6610

Max Act: 3.6610. Min Act: -0.1700

Data Index: 282801 (Open Web Text)

Max Activating Token Index: 436



Text #1

Max Range: 3.6610. Min Range: -3.6610

Max Act: 3.5160. Min Act: -0.1700

Data Index: 2116530 (Open Web Text)

Max Activating Token Index: 648



Text #2

Max Range: 3.6610. Min Range: -3.6610

Max Act: 3.4715. Min Act: -0.1700

Data Index: 2192087 (Open Web Text)

Max Activating Token Index: 337



Text #3

Max Range: 3.6610. Min Range: -3.6610

Max Act: 3.4490. Min Act: -0.1700

Data Index: 4059309 (Open Web Text)

Max Activating Token Index: 794



Text #4

Max Range: 3.6610. Min Range: -3.6610

Max Act: 3.4560. Min Act: -0.1700

Data Index: 8046645 (Open Web Text)

Max Activating Token Index: 10



Text #5

Max Range: 3.6610. Min Range: -3.6610

Max Act: 3.4549. Min Act: -0.1700

Data Index: 889978 (Open Web Text)

Max Activating Token Index: 618



Text #6

Max Range: 3.6610. Min Range: -3.6610

Max Act: 3.4584. Min Act: -0.1700

Data Index: 4941934 (Open Web Text)

Max Activating Token Index: 966



Text #7

Max Range: 3.6610. Min Range: -3.6610

Max Act: 3.4538. Min Act: -0.1700

Data Index: 8124430 (Open Web Text)

Max Activating Token Index: 450



Text #8

Max Range: 3.6610. Min Range: -3.6610

Max Act: 3.4495. Min Act: -0.1700

Data Index: 3593753 (Open Web Text)

Max Activating Token Index: 264



Text #9

Max Range: 3.6610. Min Range: -3.6610

Max Act: 3.4478. Min Act: -0.1700

Data Index: 6175986 (Open Web Text)

Max Activating Token Index: 878



Text #10

Max Range: 3.6610. Min Range: -3.6610

Max Act: 3.4210. Min Act: -0.1700

Data Index: 6322742 (Open Web Text)

Max Activating Token Index: 995



Text #11

Max Range: 3.6610. Min Range: -3.6610

Max Act: 3.4225. Min Act: -0.1700

Data Index: 2269436 (Open Web Text)

Max Activating Token Index: 316



Text #12

Max Range: 3.6610. Min Range: -3.6610

Max Act: 3.4388. Min Act: -0.1700

Data Index: 6496893 (Open Web Text)

Max Activating Token Index: 809



Text #13

Max Range: 3.6610. Min Range: -3.6610

Max Act: 3.4148. Min Act: -0.1700

Data Index: 1444191 (Open Web Text)

Max Activating Token Index: 39



Text #14

Max Range: 3.6610. Min Range: -3.6610

Max Act: 3.4038. Min Act: -0.1700

Data Index: 4669259 (Open Web Text)

Max Activating Token Index: 152



Text #15

Max Range: 3.6610. Min Range: -3.6610

Max Act: 3.4112. Min Act: -0.1700

Data Index: 5071365 (Open Web Text)

Max Activating Token Index: 703



Text #16

Max Range: 3.6610. Min Range: -3.6610

Max Act: 3.4266. Min Act: -0.1700

Data Index: 1203607 (Open Web Text)

Max Activating Token Index: 409



Text #17

Max Range: 3.6610. Min Range: -3.6610

Max Act: 3.4105. Min Act: -0.1700

Data Index: 5365250 (Open Web Text)

Max Activating Token Index: 962



Text #18

Max Range: 3.6610. Min Range: -3.6610

Max Act: 3.4156. Min Act: -0.1700

Data Index: 5543700 (Open Web Text)

Max Activating Token Index: 254



Text #19

Max Range: 3.6610. Min Range: -3.6610

Max Act: 3.4082. Min Act: -0.1700

Data Index: 3207959 (Open Web Text)

Max Activating Token Index: 865

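
Each entry above gives the neuron's peak activation on one Open Web Text document (Max Act) and the token position where that peak occurs (Max Activating Token Index); Max Range and Min Range are shared across all entries on the page. A hedged sketch of recomputing those two per-document numbers for an arbitrary text follows; note that the Data Index values refer to Neuroscope's own copy of Open Web Text, so they will not line up with other copies of the dataset.

```python
# Hedged sketch: recompute "Max Act" and "Max Activating Token Index" for a given text.
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2-small")  # or "gpt2" if the alias is unavailable
LAYER, NEURON = 2, 515

def max_activation(text: str) -> tuple[float, int]:
    tokens = model.to_tokens(text)[:, : model.cfg.n_ctx]  # truncate to the context window
    _, cache = model.run_with_cache(tokens)
    acts = cache["post", LAYER][0, :, NEURON]              # activation at each token position
    pos = int(acts.argmax())
    return float(acts[pos]), pos

act, pos = max_activation("Paste the text of a dataset document here.")
print(f"Max Act: {act:.4f} at token index {pos}")
```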