
Model: pythia-70m

Dataset: The Pile

Neuron 1 in Layer 5


Transformer Lens Loading: HookedTransformer.from_pretrained('pythia-70m')
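The loading line above can be expanded into a short sketch of how one might reproduce these per-token activations with TransformerLens. This is an illustrative assumption, not part of Neuroscope itself: the helper names (`mlp_post_hook_name`, `neuron_activations`, `demo`) and the prompt text are made up for the example.

```python
LAYER, NEURON = 5, 1  # the neuron this page documents

def mlp_post_hook_name(layer: int) -> str:
    # TransformerLens names the post-nonlinearity MLP hook
    # "blocks.<layer>.mlp.hook_post"; this is the activation that
    # per-token neuron values like the Max Act figures are read from.
    return f"blocks.{layer}.mlp.hook_post"

def neuron_activations(model, text: str, layer: int = LAYER, neuron: int = NEURON):
    # Run the model once with caching and slice out one neuron's
    # per-token activations (batch 0, all positions, one neuron).
    _, cache = model.run_with_cache(text)
    return cache[mlp_post_hook_name(layer)][0, :, neuron]

def demo():
    # Not executed here: from_pretrained downloads model weights on first use.
    from transformer_lens import HookedTransformer
    model = HookedTransformer.from_pretrained("pythia-70m")
    acts = neuron_activations(model, "Let t(h) = 12*h**2 + 26702. Give t(v(d)).")
    return acts.max().item()
```

Running `demo()` on the full text of one of the examples below should give a maximum in the ballpark of the Max Act value reported for it, though exact numbers depend on tokenization and library version.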



Text #0

Max Range: 6.7996. Min Range: -6.7996

Max Act: 6.4246. Min Act: -0.1700

Data Index: 6208763 (The Pile)

Max Activating Token Index: 269


Full Text #0

<|endoftext|>b) = -10097*b. Let t(d) be the first derivative of d**3/3 - 4012. Determine y(t(c)). 
-10097*c**2 
Let g(i) = -215*i**2. Let s(z) = 2553*z. Determine g(s(t)). 
-1401328935*t**2 
Let f(z) be the third derivative of -z**4/12 + z**3/3 + 2*z**2 - 354*z - 1. Let h(r) = 23*r. Determine f(h(i)). 
-46*i + 2 
Let t(h) = 12*h**2 + 26702. Let v(c) = -5*c**2. Give t(v(d)). 
300*d**4 + 26702 
Let m(p) = -10*p**2 + 7*p - 70. Let v(d) = -7*d**2 + 5*d - 50. Let h(i) = -5*m(i) + 7*v(i). Let k(n) = -6*n - 3. Give k(h(f)). 
-6*f**2 - 3 
Let c(u) = 158*u. Let y(d) = -35493*d. Calculate c(y(h)). 
-5607894*h 
Let t(j) = 14061817*j. Let m(n) = 4*n**2. What is m(t(v))? 
790938789365956*v**2 
Let h(j) = 15*j + 2. Let s(v) = -315*v - 40. Let y(x) = 20*h(x) + s(x). Let p(q) = -3*q - 3. Let b(w) = -4*w - 2. Let t(f) = -3*b(f) + 2*p(f). Give y(t(r)). 
-90*r 
Let t(m) = -3*m. Let u(g) = 11*g + 13. Let z(n) be the first derivative of 3*n**2 + 6*n - 45. Let l(a) = 6*u(a) - 13*z(a). Give l(t(v)). 
36*v 
Let v(a) = -9*a**2 + 10*a - 110. Let f(k) = -3*k**2 + 4*k - 44. Let i(l) = -15*f(l) + 6*v(l). Let m(h) = 0*h**2 + 2*h**2 + 3*h**2. What is m(i(u))? 
405*u**4 
Let m(a) = -244165*a + 244165*a - 11*a**2 - 4. Let k(p) = 3*p. Determine k(m(f)). 
-33*f**2 - 12 
Let o(t) = -3727*t**2. Let m(k) be the second derivative of -k**3/6 + 1703*k. Calculate o(m(c)). 
-3727*c**2 
Let l(b) = 8*b - 15. Let w(a) = 40 + 24 + 46 - 32*a - 110. Calculate w(l(t)). 
-256*t + 480 
Let k(z) = 17828*z**2. Let p(v) = 23*v - 21*v + 5 - 5. Calculate p(k(x)). 
35656*x**2 
Let w(d) = -3*d**2. Let i(k) be the second derivative of 7*k**4/4 - 5*k**3/6 - 310*k + 8. Determine i(w(u)). 
189*u**4 + 15*u**2 
Let z(q) = -21*q**2. Let p(c) = -1321*c + 1321*c + 2 + c**2. Let t(w) = -8*w**2 - 12. Let a(b) = -6*p(b) - t(b). What is a(z(d))? 
882*d**4 
Let y(p) = 311*p**2 - 1107*p + 1. Let g(q) = -2*q. Determine g(y(z)). 
-622*z**2

Text #1

Max Range: 6.7996. Min Range: -6.7996

Max Act: 5.8308. Min Act: -0.1700

Data Index: 1482543 (The Pile)

Max Activating Token Index: 715



Text #2

Max Range: 6.7996. Min Range: -6.7996

Max Act: 6.4871. Min Act: -0.1700

Data Index: 6627938 (The Pile)

Max Activating Token Index: 837



Text #3

Max Range: 6.7996. Min Range: -6.7996

Max Act: 6.0808. Min Act: -0.1700

Data Index: 1715076 (The Pile)

Max Activating Token Index: 741



Text #4

Max Range: 6.7996. Min Range: -6.7996

Max Act: 6.0808. Min Act: -0.1700

Data Index: 4787962 (The Pile)

Max Activating Token Index: 865



Text #5

Max Range: 6.7996. Min Range: -6.7996

Max Act: 5.6121. Min Act: -0.1700

Data Index: 6475072 (The Pile)

Max Activating Token Index: 574



Text #6

Max Range: 6.7996. Min Range: -6.7996

Max Act: 5.7058. Min Act: -0.1699

Data Index: 414744 (The Pile)

Max Activating Token Index: 597



Text #7

Max Range: 6.7996. Min Range: -6.7996

Max Act: 5.7683. Min Act: -0.1700

Data Index: 490407 (The Pile)

Max Activating Token Index: 813



Text #8

Max Range: 6.7996. Min Range: -6.7996

Max Act: 6.1433. Min Act: -0.1700

Data Index: 446183 (The Pile)

Max Activating Token Index: 431



Text #9

Max Range: 6.7996. Min Range: -6.7996

Max Act: 5.3308. Min Act: -0.1699

Data Index: 5832099 (The Pile)

Max Activating Token Index: 152



Text #10

Max Range: 6.7996. Min Range: -6.7996

Max Act: 5.6121. Min Act: -0.1700

Data Index: 7830712 (The Pile)

Max Activating Token Index: 864



Text #11

Max Range: 6.7996. Min Range: -6.7996

Max Act: 5.8621. Min Act: -0.1700

Data Index: 2628600 (The Pile)

Max Activating Token Index: 339



Text #12

Max Range: 6.7996. Min Range: -6.7996

Max Act: 6.7996. Min Act: -0.1700

Data Index: 6628223 (The Pile)

Max Activating Token Index: 535



Text #13

Max Range: 6.7996. Min Range: -6.7996

Max Act: 6.5808. Min Act: -0.1700

Data Index: 2858279 (The Pile)

Max Activating Token Index: 676



Text #14

Max Range: 6.7996. Min Range: -6.7996

Max Act: 6.1121. Min Act: -0.1700

Data Index: 1763504 (The Pile)

Max Activating Token Index: 644



Text #15

Max Range: 6.7996. Min Range: -6.7996

Max Act: 6.2058. Min Act: -0.1700

Data Index: 6124598 (The Pile)

Max Activating Token Index: 257



Text #16

Max Range: 6.7996. Min Range: -6.7996

Max Act: 6.3933. Min Act: -0.1700

Data Index: 10476734 (The Pile)

Max Activating Token Index: 844



Text #17

Max Range: 6.7996. Min Range: -6.7996

Max Act: 6.3933. Min Act: -0.1700

Data Index: 7218849 (The Pile)

Max Activating Token Index: 694



Text #18

Max Range: 6.7996. Min Range: -6.7996

Max Act: 5.9558. Min Act: -0.1700

Data Index: 5333286 (The Pile)

Max Activating Token Index: 651



Text #19

Max Range: 6.7996. Min Range: -6.7996

Max Act: 6.1746. Min Act: -0.1699

Data Index: 1916958 (The Pile)

Max Activating Token Index: 546
