Model: GPT-2 Small (12 layers, 3072 MLP neurons per layer)
Dataset: OpenWebText
Neuron: 2769 in Layer 7
TransformerLens loading: HookedTransformer.from_pretrained('gpt2-small')
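Each record below reports, for one OpenWebText document, the neuron's maximum activation ("Max Act") and the position of the token that produced it ("Max Activating Token Index"). These statistics can be reproduced with TransformerLens by caching the layer-7 MLP post-activations and reading off neuron 2769. A minimal sketch, assuming the standard transformer_lens API; the sample text is a placeholder:

```python
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2-small")

text = "..."  # placeholder: one OpenWebText document
tokens = model.to_tokens(text)            # shape [1, seq_len]
_, cache = model.run_with_cache(tokens)

# Layer-7 MLP post-activations have shape [1, seq_len, 3072];
# slice out neuron 2769 at every position.
acts = cache["post", 7][0, :, 2769]

print(f"Max Act: {acts.max().item():.4f}")                   # "Max Act"
print(f"Max Activating Token Index: {acts.argmax().item()}")  # token position
```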
Text #0
Max Range: 3.9541. Min Range: -3.9541
Max Act: 3.9541. Min Act: -0.1700
Data Index: 3722681 (OpenWebText)
Max Activating Token Index: 455
Text #1
Max Range: 3.9541. Min Range: -3.9541
Max Act: 3.8338. Min Act: -0.1700
Data Index: 6991902 (OpenWebText)
Max Activating Token Index: 705
Text #2
Max Range: 3.9541. Min Range: -3.9541
Max Act: 3.8287. Min Act: -0.1700
Data Index: 1764612 (OpenWebText)
Max Activating Token Index: 428
Text #3
Max Range: 3.9541. Min Range: -3.9541
Max Act: 3.8232. Min Act: -0.1700
Data Index: 5386681 (OpenWebText)
Max Activating Token Index: 219
Text #4
Max Range: 3.9541. Min Range: -3.9541
Max Act: 3.7905. Min Act: -0.1700
Data Index: 3405023 (OpenWebText)
Max Activating Token Index: 901
Text #5
Max Range: 3.9541. Min Range: -3.9541
Max Act: 3.8098. Min Act: -0.1700
Data Index: 5473384 (OpenWebText)
Max Activating Token Index: 932
Text #6
Max Range: 3.9541. Min Range: -3.9541
Max Act: 3.8052. Min Act: -0.1700
Data Index: 5435612 (OpenWebText)
Max Activating Token Index: 684
Text #7
Max Range: 3.9541. Min Range: -3.9541
Max Act: 3.7971. Min Act: -0.1700
Data Index: 6998699 (OpenWebText)
Max Activating Token Index: 892
Text #8
Max Range: 3.9541. Min Range: -3.9541
Max Act: 3.7725. Min Act: -0.1700
Data Index: 3814634 (OpenWebText)
Max Activating Token Index: 703
Text #9
Max Range: 3.9541. Min Range: -3.9541
Max Act: 3.7518. Min Act: -0.1700
Data Index: 2062712 (OpenWebText)
Max Activating Token Index: 1023
Text #10
Max Range: 3.9541. Min Range: -3.9541
Max Act: 3.7670. Min Act: -0.1700
Data Index: 3297629 (OpenWebText)
Max Activating Token Index: 1003
Text #11
Max Range: 3.9541. Min Range: -3.9541
Max Act: 3.7549. Min Act: -0.1700
Data Index: 4287776 (OpenWebText)
Max Activating Token Index: 731
Text #12
Max Range: 3.9541. Min Range: -3.9541
Max Act: 3.7530. Min Act: -0.1700
Data Index: 2736552 (OpenWebText)
Max Activating Token Index: 669
Text #13
Max Range: 3.9541. Min Range: -3.9541
Max Act: 3.7759. Min Act: -0.1700
Data Index: 4650144 (OpenWebText)
Max Activating Token Index: 454
Text #14
Max Range: 3.9541. Min Range: -3.9541
Max Act: 3.7179. Min Act: -0.1700
Data Index: 7128624 (OpenWebText)
Max Activating Token Index: 853
Text #15
Max Range: 3.9541. Min Range: -3.9541
Max Act: 3.7367. Min Act: -0.1700
Data Index: 2869567 (OpenWebText)
Max Activating Token Index: 190
Text #16
Max Range: 3.9541. Min Range: -3.9541
Max Act: 3.6988. Min Act: -0.1700
Data Index: 3764467 (OpenWebText)
Max Activating Token Index: 83
Text #17
Max Range: 3.9541. Min Range: -3.9541
Max Act: 3.6924. Min Act: -0.1700
Data Index: 5343613 (OpenWebText)
Max Activating Token Index: 500
Text #18
Max Range: 3.9541. Min Range: -3.9541
Max Act: 3.7111. Min Act: -0.1700
Data Index: 6310293 (OpenWebText)
Max Activating Token Index: 466
Text #19
Max Range: 3.9541. Min Range: -3.9541
Max Act: 3.7128. Min Act: -0.1700
Data Index: 6485145 (OpenWebText)
Max Activating Token Index: 373
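The ranking above is the kind of output produced by scanning a corpus and keeping the documents that most excite the neuron. A minimal sketch of such a scan, assuming OpenWebText is available as the Hugging Face dataset 'Skylion007/openwebtext' and capping the pass at 10,000 documents for illustration (both are assumptions; the Data Index values above come from a full-corpus index):

```python
import heapq
import torch
from datasets import load_dataset
from transformer_lens import HookedTransformer

LAYER, NEURON, TOP_K = 7, 2769, 20

torch.set_grad_enabled(False)  # inference only
model = HookedTransformer.from_pretrained("gpt2-small")
ds = load_dataset("Skylion007/openwebtext", split="train", streaming=True)

top = []  # min-heap of (max_act, data_index, token_index)
for i, example in enumerate(ds):
    if i >= 10_000:  # assumption: cap the scan for this sketch
        break
    tokens = model.to_tokens(example["text"])[:, :1024]  # GPT-2 context limit
    _, cache = model.run_with_cache(tokens)
    acts = cache["post", LAYER][0, :, NEURON]
    heapq.heappush(top, (acts.max().item(), i, acts.argmax().item()))
    if len(top) > TOP_K:
        heapq.heappop(top)  # drop the weakest example

for act, data_idx, tok_idx in sorted(top, reverse=True):
    print(f"Max Act: {act:.4f}  Data Index: {data_idx}  Token Index: {tok_idx}")
```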