
Model: SoLU, 2 layers, 2944 neurons per layer

Dataset: The Pile

Neuron 2940 in Layer 0


TransformerLens loading: HookedTransformer.from_pretrained('solu-2l-pile')
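
A minimal sketch of what that loading line gives you, and of how the "Max Activating Token Index" values below can be read off a cache. The prompt is a placeholder, not one of the Pile documents listed on this page; only the layer/neuron indices (0, 2940) come from the page itself.

```python
import torch
from transformer_lens import HookedTransformer

# Load the 2-layer SoLU model trained on the Pile.
model = HookedTransformer.from_pretrained("solu-2l-pile")

# Cache all activations for an arbitrary prompt (placeholder text).
prompt = "The quick brown fox jumps over the lazy dog."
logits, cache = model.run_with_cache(prompt)

# Post-SoLU MLP activations in layer 0: shape [batch, n_tokens, d_mlp=2944].
# Slice out this page's neuron: layer 0, neuron 2940.
acts = cache["post", 0][0, :, 2940]

# The max activating token index is the argmax over token positions.
print(int(torch.argmax(acts)), float(acts.max()))
```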



Top 20 max activating examples (all share Max Range: 0.0086, Min Range: -0.0086):

Text   Max Act   Min Act   Data Index (The Pile)   Max Activating Token Index
#0     0.0086    -0.0002   1571438                 453
#1     0.0084    -0.0002   1689611                 789
#2     0.0083    -0.0002   1080726                 75
#3     0.0082    -0.0003   517343                  592
#4     0.0082    -0.0002   1370094                 488
#5     0.0082    -0.0002   533350                  222
#6     0.0081    -0.0002   1526512                 148
#7     0.0083    -0.0003   301198                  62
#8     0.0081    -0.0003   242113                  947
#9     0.0080    -0.0002   187700                  393
#10    0.0080    -0.0002   1023265                 592
#11    0.0079    -0.0002   1106062                 724
#12    0.0079    -0.0002   1441696                 52
#13    0.0079    -0.0003   303055                  1001
#14    0.0077    -0.0003   1359115                 861
#15    0.0079    -0.0003   592186                  864
#16    0.0078    -0.0002   1696153                 117
#17    0.0078    -0.0002   130598                  308
#18    0.0077    -0.0002   507804                  885
#19    0.0077    -0.0002   73140                   363

Full text for each example is available via the toggles on the interactive page.
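
A hedged sketch of how a top-20 list like the one above could be rebuilt. It assumes the NeelNanda/pile-10k sample on the Hugging Face Hub as a stand-in corpus; the Neuroscope run used the full Pile, so data indices from this sketch will not match the table.

```python
import heapq
from datasets import load_dataset
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("solu-2l-pile")
LAYER, NEURON, TOP_K = 0, 2940, 20

# Stand-in corpus (assumption): a 10k-document sample of the Pile.
dataset = load_dataset("NeelNanda/pile-10k", split="train")

top = []  # min-heap of (max_act, data_index, token_index)
for i, row in enumerate(dataset.select(range(200))):  # small slice for speed
    _, cache = model.run_with_cache(row["text"][:2000])
    acts = cache["post", LAYER][0, :, NEURON]
    val, idx = acts.max(dim=0)
    heapq.heappush(top, (float(val), i, int(idx)))
    if len(top) > TOP_K:
        heapq.heappop(top)  # drop the weakest example

for val, data_idx, tok_idx in sorted(top, reverse=True):
    print(f"act={val:.4f}  data_index={data_idx}  token_index={tok_idx}")
```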