Model: GPT-2 Small (12 layers, 3072 MLP neurons per layer)
Dataset: Open Web Text
Neuron 933 in Layer 0
Hooked Transformer Loading: HookedTransformer.from_pretrained('gpt2-small')
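The per-text statistics below (Max Act, Min Act, Max Activating Token Index) can be reproduced with TransformerLens. A minimal sketch, assuming a hypothetical helper named neuron_stats; only the model name and the layer/neuron indices come from this page:

from transformer_lens import HookedTransformer

LAYER, NEURON = 0, 933  # Neuron 933 in Layer 0, as listed above

model = HookedTransformer.from_pretrained('gpt2-small')

def neuron_stats(text):
    # Tokenize and run the model, caching all intermediate activations.
    tokens = model.to_tokens(text)
    _, cache = model.run_with_cache(tokens)
    # Post-nonlinearity MLP activations for layer 0: shape [batch, seq_len, 3072].
    acts = cache[f"blocks.{LAYER}.mlp.hook_post"][0, :, NEURON]
    return acts.max().item(), acts.min().item(), int(acts.argmax().item())

# Example usage on a placeholder string (the real inputs are Open Web Text documents).
max_act, min_act, max_token_idx = neuron_stats("An example passage of text.")
print(max_act, min_act, max_token_idx)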
Text #0
Max Range: 1.2960. Min Range: -1.2960
Max Act: 1.2960. Min Act: -0.0584
Data Index: 8614476 (Open Web Text)
Max Activating Token Index: 623
Text #1
Max Range: 1.2960. Min Range: -1.2960
Max Act: 1.2936. Min Act: -0.1158
Data Index: 6248008 (Open Web Text)
Max Activating Token Index: 632
Text #2
Max Range: 1.2960. Min Range: -1.2960
Max Act: 1.2882. Min Act: -0.1425
Data Index: 6461958 (Open Web Text)
Max Activating Token Index: 715
Text #3
Max Range: 1.2960. Min Range: -1.2960
Max Act: 1.2761. Min Act: -0.1302
Data Index: 4700720 (Open Web Text)
Max Activating Token Index: 672
Text #4
Max Range: 1.2960. Min Range: -1.2960
Max Act: 1.2807. Min Act: -0.0706
Data Index: 303717 (Open Web Text)
Max Activating Token Index: 679
Text #5
Max Range: 1.2960. Min Range: -1.2960
Max Act: 1.2811. Min Act: -0.0996
Data Index: 4765503 (Open Web Text)
Max Activating Token Index: 616
Text #6
Max Range: 1.2960. Min Range: -1.2960
Max Act: 1.2760. Min Act: -0.0716
Data Index: 8143085 (Open Web Text)
Max Activating Token Index: 602
Text #7
Max Range: 1.2960. Min Range: -1.2960
Max Act: 1.2756. Min Act: -0.1061
Data Index: 1175971 (Open Web Text)
Max Activating Token Index: 690
Text #8
Max Range: 1.2960. Min Range: -1.2960
Max Act: 1.2741. Min Act: -0.1479
Data Index: 77007 (Open Web Text)
Max Activating Token Index: 687
Text #9
Max Range: 1.2960. Min Range: -1.2960
Max Act: 1.2770. Min Act: -0.1171
Data Index: 5591269 (Open Web Text)
Max Activating Token Index: 638
Text #10
Max Range: 1.2960. Min Range: -1.2960
Max Act: 1.2802. Min Act: -0.1302
Data Index: 4035105 (Open Web Text)
Max Activating Token Index: 663
Text #11
Max Range: 1.2960. Min Range: -1.2960
Max Act: 1.2750. Min Act: -0.1240
Data Index: 828628 (Open Web Text)
Max Activating Token Index: 634
Text #12
Max Range: 1.2960. Min Range: -1.2960
Max Act: 1.2695. Min Act: -0.0804
Data Index: 7441662 (Open Web Text)
Max Activating Token Index: 658
Text #13
Max Range: 1.2960. Min Range: -1.2960
Max Act: 1.2761. Min Act: -0.0500
Data Index: 4329430 (Open Web Text)
Max Activating Token Index: 675
Text #14
Max Range: 1.2960. Min Range: -1.2960
Max Act: 1.2719. Min Act: -0.1187
Data Index: 6609382 (Open Web Text)
Max Activating Token Index: 414
Text #15
Max Range: 1.2960. Min Range: -1.2960
Max Act: 1.2617. Min Act: -0.0477
Data Index: 1868727 (Open Web Text)
Max Activating Token Index: 674
Text #16
Max Range: 1.2960. Min Range: -1.2960
Max Act: 1.2652. Min Act: -0.0957
Data Index: 6268095 (Open Web Text)
Max Activating Token Index: 624
Text #17
Max Range: 1.2960. Min Range: -1.2960
Max Act: 1.2589. Min Act: -0.1302
Data Index: 3421744 (Open Web Text)
Max Activating Token Index: 667
Text #18
Max Range: 1.2960. Min Range: -1.2960
Max Act: 1.2592. Min Act: -0.1536
Data Index: 3021953 (Open Web Text)
Max Activating Token Index: 620
Text #19
Max Range: 1.2960. Min Range: -1.2960
Max Act: 1.2659. Min Act: -0.1327
Data Index: 6691223 (Open Web Text)
Max Activating Token Index: 630
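A listing like the one above (sorted roughly by Max Act, with each Data Index pointing into Open Web Text) could be produced by scanning a corpus and keeping the documents that most strongly activate the neuron. A hedged sketch under those assumptions, reusing the hypothetical neuron_stats helper from the first sketch; owt_docs is a placeholder for the actual Open Web Text documents:

# Placeholder corpus; the real records index into Open Web Text.
owt_docs = ["first placeholder document", "second placeholder document"]

records = []
for data_index, text in enumerate(owt_docs):
    max_act, min_act, token_idx = neuron_stats(text)
    records.append((max_act, min_act, data_index, token_idx))

# Keep the texts with the highest neuron activation, as in the list above.
records.sort(key=lambda r: r[0], reverse=True)
for max_act, min_act, data_index, token_idx in records[:20]:
    print(f"Max Act: {max_act:.4f}  Min Act: {min_act:.4f}  "
          f"Data Index: {data_index}  Max Activating Token Index: {token_idx}")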