Model: GPT-2 XL: 48 Layers, 6400 Neurons per Layer
Dataset: Open Web Text
Neuron 6285 in Layer 43
TransformerLens Loading: HookedTransformer.from_pretrained('gpt2-xl')
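
The loading line above uses TransformerLens. As a hedged illustration (not part of the original dump), the sketch below shows how the per-text statistics reported in the entries that follow (Max Act and Max Activating Token Index) could be read off for Neuron 6285 in Layer 43; the example prompt is a placeholder, since the full Open Web Text documents are not reproduced here.

from transformer_lens import HookedTransformer

LAYER, NEURON = 43, 6285  # Layer 43, Neuron 6285, per the header above

model = HookedTransformer.from_pretrained('gpt2-xl')  # 48 layers, d_mlp = 6400

text = "Placeholder prompt standing in for an Open Web Text document."
tokens = model.to_tokens(text)

# Cache only the post-nonlinearity MLP activations for the layer of interest.
hook_name = f"blocks.{LAYER}.mlp.hook_post"
_, cache = model.run_with_cache(tokens, names_filter=hook_name)

# Shape [batch, position, d_mlp]; slice out this neuron's activation per token.
neuron_acts = cache[hook_name][0, :, NEURON]
max_act, max_idx = neuron_acts.max(dim=0)
print(f"Max Act: {max_act.item():.4f}  Max Activating Token Index: {max_idx.item()}")
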
Text #0
Max Range: 7.5268. Min Range: -7.5268
Max Act: 7.5268. Min Act: -0.1700
Data Index: 6533351 (Open Web Text)
Max Activating Token Index: 403
Text #1
Max Range: 7.5268. Min Range: -7.5268
Max Act: 7.4330. Min Act: -0.1700
Data Index: 6347098 (Open Web Text)
Max Activating Token Index: 725
Text #2
Max Range: 7.5268. Min Range: -7.5268
Max Act: 7.4643. Min Act: -0.1700
Data Index: 506271 (Open Web Text)
Max Activating Token Index: 219
Text #3
Max Range: 7.5268. Min Range: -7.5268
Max Act: 7.3705. Min Act: -0.1700
Data Index: 572335 (Open Web Text)
Max Activating Token Index: 365
Text #4
Max Range: 7.5268. Min Range: -7.5268
Max Act: 7.0893. Min Act: -0.1700
Data Index: 6532731 (Open Web Text)
Max Activating Token Index: 765
Text #5
Max Range: 7.5268. Min Range: -7.5268
Max Act: 6.9643. Min Act: -0.1700
Data Index: 1222494 (Open Web Text)
Max Activating Token Index: 604
Text #6
Max Range: 7.5268. Min Range: -7.5268
Max Act: 6.9018. Min Act: -0.1700
Data Index: 3284692 (Open Web Text)
Max Activating Token Index: 1018
Text #7
Max Range: 7.5268. Min Range: -7.5268
Max Act: 6.9330. Min Act: -0.1700
Data Index: 7191413 (Open Web Text)
Max Activating Token Index: 215
Text #8
Max Range: 7.5268. Min Range: -7.5268
Max Act: 6.9018. Min Act: -0.1700
Data Index: 573985 (Open Web Text)
Max Activating Token Index: 659
Text #9
Max Range: 7.5268. Min Range: -7.5268
Max Act: 6.8705. Min Act: -0.1700
Data Index: 1760541 (Open Web Text)
Max Activating Token Index: 331
Text #10
Max Range: 7.5268. Min Range: -7.5268
Max Act: 6.8705. Min Act: -0.1700
Data Index: 550723 (Open Web Text)
Max Activating Token Index: 784
Text #11
Max Range: 7.5268. Min Range: -7.5268
Max Act: 6.8393. Min Act: -0.1700
Data Index: 1766436 (Open Web Text)
Max Activating Token Index: 208
Text #12
Max Range: 7.5268. Min Range: -7.5268
Max Act: 6.8705. Min Act: -0.1700
Data Index: 2196962 (Open Web Text)
Max Activating Token Index: 474
Text #13
Max Range: 7.5268. Min Range: -7.5268
Max Act: 6.8393. Min Act: -0.1700
Data Index: 594031 (Open Web Text)
Max Activating Token Index: 946
Text #14
Max Range: 7.5268. Min Range: -7.5268
Max Act: 6.8080. Min Act: -0.1700
Data Index: 6686084 (Open Web Text)
Max Activating Token Index: 852
Text #15
Max Range: 7.5268. Min Range: -7.5268
Max Act: 6.7768. Min Act: -0.1700
Data Index: 6508489 (Open Web Text)
Max Activating Token Index: 739
Text #16
Max Range: 7.5268. Min Range: -7.5268
Max Act: 6.8080. Min Act: -0.1700
Data Index: 7556042 (Open Web Text)
Max Activating Token Index: 382
Text #17
Max Range: 7.5268. Min Range: -7.5268
Max Act: 6.8080. Min Act: -0.1700
Data Index: 4341426 (Open Web Text)
Max Activating Token Index: 822
Text #18
Max Range: 7.5268. Min Range: -7.5268
Max Act: 6.7768. Min Act: -0.1700
Data Index: 6133962 (Open Web Text)
Max Activating Token Index: 955
Text #19
Max Range: 7.5268. Min Range: -7.5268
Max Act: 6.7455. Min Act: -0.1700
Data Index: 8049117 (Open Web Text)
Max Activating Token Index: 229
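
For completeness, here is a hedged sketch of how a top-activating-examples table like the one above might be assembled; this is an assumption about the methodology, not taken from the original page. The idea is to scan a corpus, record each document's maximum activation of the neuron, and keep the strongest examples. The documents list below is a stand-in; the real Data Index values refer to an Open Web Text corpus not included here.

import heapq
from transformer_lens import HookedTransformer

LAYER, NEURON, TOP_K = 43, 6285, 20
hook_name = f"blocks.{LAYER}.mlp.hook_post"

model = HookedTransformer.from_pretrained('gpt2-xl')

documents = ["stand-in document 1", "stand-in document 2"]  # placeholder corpus

top = []  # min-heap of (max_act, data_index, token_index)
for data_index, text in enumerate(documents):
    tokens = model.to_tokens(text)
    _, cache = model.run_with_cache(tokens, names_filter=hook_name)
    acts = cache[hook_name][0, :, NEURON]
    max_act, token_index = acts.max(dim=0)
    entry = (max_act.item(), data_index, token_index.item())
    if len(top) < TOP_K:
        heapq.heappush(top, entry)
    else:
        heapq.heappushpop(top, entry)  # keep only the TOP_K largest maxima

for max_act, data_index, token_index in sorted(top, reverse=True):
    print(f"Max Act: {max_act:.4f}  Data Index: {data_index}  Max Activating Token Index: {token_index}")
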