axrp 2023-07-27.txt
daniel filan hello everybody in this episode i'll be speaking with jan leike after working for four years at deepmind on reinforcement learning from human feedback and recursive reward modelling in early 2021 jan joined openai where he now co-leads the recently-announced superalignment team for links to what we're discussing you can check the description of this episode and you can read the transcript at axrp.net welcome to axrp
jan leike thanks a lot for having me
daniel filan yeah not at all so first of all i guess we're going to be talking about this announcement of the superalignment team for people who somehow haven't heard of that or haven't read that blog post can you recap what it is and what it's going to be doing
jan leike yeah i'm excited to so basically we want to set ourselves an ambitious goal of solving alignment of superintelligence within the next four years so by mid-2027 and ilya sutskever the co-founder and chief scientist of openai is joining the team he's co-leading it with me and openai is committing 20% of the compute secured so far to this effort or to the effort of aligning superintelligence and so we're staffing up the effort a lot we are hiring a lot of people in particular we're interested in hiring machine learning researchers and engineers who haven't really worked that much on alignment before because we think there's a lot of scope for them to contribute and have a really big impact yeah we have a general overall plan of how we want to approach the problem that involves training a roughly human-level alignment researcher that can work automatically and then ask that automated alignment researcher to figure out how to align superintelligence
daniel filan okay
jan leike and so one of the key pieces for us to do would be to figure out how to align this automated alignment researcher
daniel filan okay yeah i'd actually like to get into this i think in the blog post you used the phrase human-level automated alignment researcher right what should i imagine here what is that
jan leike yeah so basically we want to offload as many of the tasks that we do when we're doing alignment work to an automated system [as possible] so typically when you're using llms or if you're building an ai system in general the skill profile they have isn't exactly what a human's would be right they would be vastly better at some things like language models are now at translation or knowing facts and so on and then the ai system would be significantly worse at some other tasks like language models are right now with for example arithmetic and so the question then becomes what are the kind of tasks that we can offload to the ai systems and in which order and as we are doing this you'd expect humans would focus more and more on the tasks that we are not offloading and so as we go into that process ai systems are doing a larger and larger chunk of the overall work and human researchers will thus basically become more and more effective at actually making progress
daniel filan okay so should i imagine something like instead of you replace the first openai alignment team employee and then the second one [we] should imagine you replace this type of task that everyone is doing and then this type of task that everyone is doing roughly that kind of thing
jan leike yeah that's how i picture it going and then i think in order to actually get a lot of work out of the system right you would want to have 99% or 99.9% of the tasks being automated because then you have effectively 10x 100x 1000x as much research output
daniel filan okay what kinds of tasks are you imagining it doing
jan leike so broadly i would throw them into two different buckets one bucket is the tasks that look more like traditional ml engineering research that you would do if you were just trying to make ai systems be more capable and then the other bucket is all the other things that we have to do on alignment and so in the first bucket this is stuff like you're implementing ml experiments running them and looking at the results and the second bucket it's more like how do you for example figure out what experiments you should run to improve scalable oversight or how do you make progress in interpretability right these are really big high level questions
but there's also just a lot more detailed questions [eg say] you have a given point that you are in research - let's say you have just written a paper and you're like okay what do we need to do next if we continued down this route and so i expect that basically ml in general will get really good at the first bucket of just designing running experiments automatically and our job of differentially accelerating alignment progress would be to figure out how to automate the second bucket
daniel filan okay and so you're conceiving the second bucket as the full stack from coming up with research directions to coming up with ideas of what things might work to all the way down to what script do i run right now
jan leike yeah i mean you could ask me if i think that alignment research is so similar to machine learning research how much is there really in the second bucket but i think there's actually a lot in there and it's highly leveraged because alignment as a problem is still so vague and confusing and i think in general there's a lot of disagreement among experts around the most promising directions or what we should do next and so the more you can accelerate what we do there it will actually have really large impact
daniel filan okay cool
jan leike this is basically the same pitch that you would give for a researcher to join the field right
daniel filan yeah
jan leike we're still trying to figure out the basics it's a wide open research problem we don't know how to align superintelligence or even systems that are significantly smarter than humans
daniel filan it makes sense yeah it's like we want to recruit ai just like we want to recruit more people i guess
jan leike that's right
daniel filan all right
jan leike but there's something really beautiful about recruiting ai which is it scales so much better and faster than humans do because all you need to do is buy more gpus and then you have more ai
daniel filan makes sense so one question i had is when you said a human-level alignment researcher it seems often in ai most things aren't exactly human-level at anything right so you mentioned chat models i think they're superhuman just in terms of breadth of knowledge right i think it would be hard for anyone to know as many facts as gpt-4 does but [they're] subhuman at arithmetic at least if the human's allowed to have pen and paper you know so how important is the ‘human-level' qualifier on these lists of tasks if it's really superhuman at some of them is that a problem for you or is that just so much the better
jan leike yeah i think the question is really how risky is it to run that system on the task of alignment research because if it knows a lot of facts that isn't particularly scary but what we really need to figure out is if we let the system take over some amount or ultimately almost all of our alignment research will it lie to us will it try to deceive us will it try to take the opportunity to take over because now it's doing so much stuff that we can't look at [it all] ourselves and so the question is the kind of skillset that you would need to do this how does it compare to the kind of skillset that we would need to get a lot of assistance in alignment research
and if you zoom into that question what are actually the things that we would be worried about this is like how good is it is the model spinning really coherent lies or being deceptive or pretending to do something or believe one thing and then actually wanting another i think another really key capability here is self-exfiltration so how good would the model be at breaking the security precautions and accessing its own weights and trying to copy it somewhere else on the internet or persuading an engineer with access to the weights to download them and send them somewhere and so we can specifically measure how good the models are at that and then we can compare it to measuring and how good is it at actually helping us with alignment research
daniel filan okay and so roughly the idea is you want the models to not be too good at these scary tasks
jan leike that's right
daniel filan yeah so this actually relates to a critique of this line of research that basically says okay if i want a human-level automated alignment researcher it needs to be pretty smart it needs to be creative right it needs to think of things that we haven't thought of yet it needs to be able to plan towards a goal - i want to get this so i've got to do these non-obvious things in the way i've got to learn things about the world… and it's also got to be really good at thinking about misalignment right in order to solve misalignment problems and so one might think oh that combination of things that's inherently scary or dangerous and i guess the question almost is then if the task is you're building something you're aligning this automated alignment researcher do you even have any problems left for it to solve
jan leike yeah i think ultimately this is an empirical question it's really difficult to know in which order which skills get unlocked when you scale up the models there's a lot more work aimed at predicting emerging capabilities now and i'm really excited about that i think it'll give us some chance of actually predicting what the next pre-trained model will be like but i think we can make some high-level arguments so for example one thing that is pretty clear is once you have the model [so] good you can hand a lot of alignment research off to [it] wouldn't it then also be able to just improve its own capabilities or it can do a bunch of ml research so it can run experiments on improving compute efficiency or something and then you could use that and pre-train a much more capable model shortly thereafter
and i think the story sounds on the surface appealing but i think in practice it will be actually a lot more complicated because you're not doing big pre-trained runs every week they usually take a few months and so it would be a few months before you actually have that in the meantime you still get to use the system i think the other thing is also there's kind of an open question of how much low-hanging fruit there still is on actually having compute efficiency wins and i think ultimately the argument here i would make is that right now the existing community of people who are trying really hard to get ai to go faster and be more capable is already quite large relative to the alignment community and so if you get to automate a lot of these tasks and both communities benefit equally then i think actually alignment benefits a lot more because it's a smaller community and so we don't have to do these tasks anymore
daniel filan sure so i took the plan to be something like we're going to make this automated alignment researcher and then it's going to help us with alignment and given that in order to make an automated alignment researcher it needs quite a lot of pretty smart pretty good capabilities what problems does it need to solve that we wouldn't have needed to solve in order to get it
jan leike i'm not sure i understand your question maybe i can answer the second part of the previous question that you asked
daniel filan sure
jan leike which is what about long run goals what about creativity it seems like to me at least that language models or ai in general has proven to be on average more creative than humans i would say if you look at i don't know the diffusion model images or if you sample from a pre-trained base model or something there's a lot of really wild stuff in there and there's a lot of creativity that i think you would really struggle to get out of a single human or a small group of humans just because the model has seen the whole range of everything humans have said or all the images on the internet and so they can sample actually from the whole distribution whereas individual humans typically can't
and then in terms of long-run goals i think this is actually not needed at all because we can hand off pretty small well-scoped tasks to ai systems that if they really nail those it would actually be really useful right and that could be really small range things like here's the paper that we just wrote please suggest some next steps or some new experiments to do and if you imagine having a really a-star researcher that you can ask these questions they don't have to pursue long-run goals right they only have to optimize over the next i don't know few thousand tokens and if they do that super well then you would get a lot of value out of them
daniel filan i guess that seems like it's in tension with this aim to automate 99.9% of alignment research right because i would think that thinking what things do we need in order to get an aligned ai… i would think that's a significant chunk of the difficulty of doing alignment research
jan leike that's right so what i wanted to say is the system's adding a lot of value by really excelling in these tasks right and then what you do is you have a whole portfolio of these tasks some tasks will be like write the code that implements these experiments and then there's another task that's like look at the results and tell me what you see or suggest what to do next and now once you have done these tasks you compose them using some general recipe as people do with auto-gpt or language model programs where each of the tasks is small and self-contained and so the system doesn't need to pursue long-run goals
or for example the recent work that came out from openai on using process-based feedback on math where instead of training on did the system get the right solution and just doing rl on that you train a reward model from human feedback on every step in the proof and it turns out that's actually much more effective because it gives the ai system a much more fine-grained way to learn and more detailed feedback now is that going to be competitive with doing end-to-end rl on did you get the solution right in the long run that is very unclear but at the very least you can use this kind of broken-down step-by-step setup to get the system to do a lot of really useful things that humans would've done and then piece it together
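[editor's note: below is a minimal python sketch of the contrast jan describes between outcome-based and process-based feedback on a step-by-step solution - the reward_model object and its score_step method are hypothetical placeholders, not openai's actual implementation]

def outcome_reward(final_answer, correct_answer):
    # outcome-based feedback: one sparse signal for the whole solution,
    # only telling you whether the final answer was right
    return 1.0 if final_answer == correct_answer else 0.0

def process_rewards(solution_steps, reward_model):
    # process-based feedback: a reward model trained from human step-level
    # labels scores every step of the proof, giving much more fine-grained
    # and detailed feedback to learn from
    return [reward_model.score_step(solution_steps[: i + 1])
            for i in range(len(solution_steps))]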
daniel filan yeah although even the small tasks… so one of the small tasks you mentioned was look at results and decide what to do next i guess i would've thought that in order to do that you have to have the big picture in mind and think what next project is most useful for my goal of solving superalignment in four years right
jan leike that's right but you wouldn't do this in the sense that you're trying to optimize and do credit assignment for four years it's probably more like well you're just adding some broader goals and context into your prompt but when you're actually doing reinforcement learning or reinforcement learning from human feedback to improve the system then you don't have to wait until that research project concludes to decide whether or not it's good it's just you use the human as reward shaping you're just like does this look like a good direction that seems better than any directions i could have thought of or something and so i think the overall goal here is not to build the most capable automated alignment researcher that we possibly could with the tech that we have [but] rather build something that is really really useful that we can scale up a lot and most importantly that we trust is aligned enough to hand off these tasks to
and so if we introduce a fair amount of inefficiencies in these processes where we're essentially sandbagging the model capabilities a whole bunch by training it this way and we are like well we're giving it these broken-down tasks and now it would execute those but if we train it end-to-end it would be more capable or something i don't think that matters as much so this is typically what people call the alignment tax and the alignment tax matters a lot if you're let's say competing in the market with other companies if i'm building a chatbot or something and my chatbot is more aligned but it seems a lot less capable then it would have a hard time competing in the market but if you have an automated alignment researcher the automated alignment researcher doesn't have to compete in the market it just has to be useful to us and so we can get away with paying a higher tax because we just don't have a replacement or the real replacement is hiring more humans which doesn't scale that well
daniel filan okay i guess another way to ask the question i was going to ask previously is what problems will you want this automated alignment researcher to solve
jan leike i mean ultimately it should solve the problem of how do we align superintelligence
daniel filan okay so is the idea that we solve the problems of how to align something roughly human-level and then there's additional problems as it gets smarter and it just starts taking on those
jan leike yeah basically so i imagine that an actual solution to aligning superintelligence that we and lots of other people actually truly believe in will look quite different from what we do today if you look at how chatgpt is aligned today it's a lot of reinforcement learning from human feedback and there's a widely-shared belief that i share that that just won't scale because it fundamentally assumes that humans really understand what the system is doing in detail and if the system is doing a lot of alignment research where you're thinking of millions of virtual human equivalents or something there's no way you'll be able to look at all of it and give detailed feedback and it's a really difficult task and you'll miss lots of important bugs but the kind of techniques we're looking at right now to scale this and align a roughly human-level alignment researcher that can do these difficult tasks but won't do it crazily differently than humans would are steps or continuations of that where scalable oversight for example is a natural continuation of reinforcement learning from human feedback
daniel filan what is that
jan leike scalable oversight
daniel filan yeah
jan leike so scalable oversight i would define as generally a portfolio of ideas and techniques that allow us to leverage ai to assist human evaluation on difficult tasks
daniel filan okay so scalable oversight is an example of the thing that you could build off of reinforcement learning from human feedback often called rlhf
jan leike yeah and so typical examples of scalable oversight are debate recursive reward modeling iterated distillation and amplification automated market making and so on and there's a bunch of ideas floating around but what i actually wanted to say to get back to your original question is i think if we are actually aligning superintelligence which is this system that is vastly smarter than humans can think a lot faster and run at a much larger scale that just introduces a whole lot of other problems especially because it will be super general can do lots of tasks and then you have to figure out how to align it not just on the much narrower distribution of alignment research-y tasks but everything else and also you want to have a much higher degree of confidence that you've actually succeeded than you would get with let's say a bunch of empirical evaluations
and so i don't really know what that would look like and i don't think anyone does but i think it would be really exciting if we can have some formal verification in there maybe we've figured out some kind of learning algorithm that has theoretical guarantees i don't know what even would be possible here and theoretically feasible if you have a lot of cognitive labor that you can throw at the problem but all of these things are very different from the kind of things that we would do right now or that we would do next and i also don't think that a roughly human-level alignment researcher would start working at those problems right away and instead what they would want to do is we would want them to figure out how to better align the next iteration of itself that then can work on this problem with even more brain power and then make more progress and tackle a wider range of approaches and so you kind of bootstrap your way up to eventually having a system that can do very different research that will then allow us to align superintelligence
daniel filan gotcha so once you have these human-level ai alignment researchers running around do you still have a job does openai still have a human superalignment team
jan leike yeah good question i mean i would be excited to be replaced by ai to be honest but historically what typically happens is what we mentioned earlier the ai assistant does 99% or 99.9% and then we just do the rest of the work and i think also something i'm really bullish on is making sure that humans stay in the loop or stay in control over what ai systems are actually doing even in the long run when we're long past really being able to understand everything they're doing and so there'll be some humans that still have to try to understand what the high level is of what the ai is trying to do so that wouldn't necessarily have to be the superalignment team that is at openai now it might also require a very different skillset than we have right now but i think ultimately humans should always stay in the loop somehow
daniel filan okay gotcha so yeah actually speaking of… i guess the loop wow that's a bad segue but i'm going with it so one thing that openai has mentioned in a few blog posts is so firstly there's this idea that safety is actually pretty linked to capabilities you need smart models to figure out the problems of alignment another thing is that there's this desire to avoid fast takeoff and i think there's this quote from ‘planning for agi and beyond' that says it's possible that agi capable enough to accelerate its own progress could cause major changes to happen surprisingly quickly and then it says we think a slower takeoff is easier to make safe so one thing i wonder is if we make this really smart or human-level alignment researcher [so] that we then effectively 10x or a 100x or something the size of the alignment team does that end up playing into this recursive self-improvement loop
jan leike i mean it really has to right you can't have a recursive self-improvement loop without also improving your alignment a lot i mean i personally think fast takeoff is reasonably likely and we should definitely be prepared for it to happen and if it doesn't happen i'm happy about that
daniel filan how fast are we talking
jan leike i mean ultimately i don't know but you can draw some parallels to some other machine learning projects like alphago or dota or starcraft where the system really improved a lot week over week yeah there's obviously a lot of uncertainty over what exactly happens but i think we should definitely plan for that possibility and if that does happen a really good way to try to keep up with it is to have your automated alignment researchers that can actually do thousands of years of equivalent work within every week and there's just no way that that's going to happen with humans
daniel filan okay now that we have a better sense of what we're looking at with a human-level automated alignment researcher can you give a sense of what the plan is to make one
jan leike yeah so i mean you basically need two parts one is you need a system that is smart enough to do it and then the second part is you need to align it to actually do it and i think they're not two separate parts i think they're very intimately linked and i'm personally not working on the first one i think that will definitely happen and a lot of people are working hard at making it happen and there's lots of different ways that could go but you could just picture training a much larger model and eventually it will just be smart enough and on the second part that's the part i'm really interested in and so the question is now i have this really smart pre-trained model how do you get it to actually do alignment research in the way that you would want it to or more importantly or more critically how do you know whether you can trust it enough to do that
because if you're handing this kind of task off to ai there's this opportunity that a smart ai system could just try to figure out how to present results and techniques to you in a way that if you then decided to implement them would actually empower the system itself or some other related system and give it more opportunity to have more power-seeking behavior in the future or let's say self-exfiltrate for example and so what we need to do is one actually align it to do the task and leverage its capabilities to do it basically as well as it could so that is what i would call we need a bunch of alignment training methods and then we also need to be able to distinguish a model that is actually aligned on the task and tries to do alignment research really well and actually help us and be honest and be truth-seeking and a system that pretends to be that to us but actually wants to take over the world or (say) self-exfiltrate or do something else and these are two different systems one is basically the truly aligned one and one is the deceptively misaligned one
and an important part of our job is to figure out how to distinguish these two systems and that will require a bunch of methods that i would call validation methods so these are methods that we wouldn't really train on in the sense that the ai systems wouldn't be directly selected on passing the validation tests but rather they will give us some kind of independent signal of which one of these two versions it is and so examples here are interpretability research for example if i can look inside the model and find some evidence that it's secretly plotting against us that will fail the validation test another technique that could either be training or validation depending on how it's set up is this easy-to-hard generalization problem can you understand and improve the model's ability to generalize from easy problems that we can supervise to harder problems that we struggle to supervise and if you can do that now you can just supervise the model on the easy parts and then cross-check on the difficult ones where you would worry that the model might secretly deceive you or try to do something else
daniel filan okay cool so yeah there's a few things there i guess i'd like to talk about the scalable oversight part first how are we going to do this especially as background i think we live in a world where there's not a lot of consensus on what good alignment research is so how are we going to get a training signal for what counts as good alignment research
jan leike yeah exactly and i think that the reason that there isn't a consensus is good evidence that the problem is actually hard but also it tells us that well the field is still immature and we haven't gotten that much empirical evidence so far but i think there's some really important property that alignment research has that we can leverage for scalable oversight which is that i think it's fundamentally easier to evaluate alignment research than it is to do it and [that] doesn't mean it's easy to evaluate but it is much easier to find for example a paper that has a cool idea and does some cool experiments and gets some good results and you read that and you're like oh actually this is a really good idea i think this is good than it would be to produce that work in the first place
and so leveraging this principle that evaluation is easier than generation is the core of a lot of scalable oversight ideas and so for example if you're thinking about recursive reward modeling where the basic idea is you have some kind of ai system that you use as an assistant to help you evaluate some other ai system so now because evaluation is easier than generation the task that the assistant has to do is a simpler task especially because you are working together with it so actually aligning the assistant on the simpler task of the evaluation assistant where let's say the claim is that that would be easier and if you succeed at that task then now you use your human/assistant conjunction to supervise a new system on an even harder task and so if you keep doing that you're unlocking a broader and broader scope of tasks that you can effectively supervise your ai systems on
daniel filan so somehow the idea is that we're iteratively adding more and more ai knowledge basically to the evaluation portion of ai alignment research and by doing it this iterative way the idea is that we can consistently give it a good training signal
jan leike yeah
daniel filan okay
jan leike let me make this a little bit more concrete so for example rlhf is the simplest one where you don't use any assistant your ai system does something a human looks at it and says whether it's good and that's your training signal
and now the next step up from that is basically you train the simplest assistant model which is just a critique model you have a separate language model that looks at what the first ai system did and then it writes a critique so let's say the first ai system writes some piece of code you look at the code humans are notoriously bad at finding bugs in code that's why there's so much buggy code out there in the world but now if you have a critique system that writes a critique and it points out the bug it's so much easier for the human to say oh yeah this is totally a bug we should fix that
and it's also not a super crisp task necessarily because usually the code is written for some natural language specification what is actually meant by that is somewhat vague and actually whether something is a bug can be ambiguous but the important point here is that because you're using the critique model as an assistant you can now supervise a wider range of tasks because essentially you could find all the problems in code that your ai system would've written a critique about which might not be exactly the same as all the code where it knows there's a bug in there but you have expanded the range of things that you can supervise
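[editor's note: below is a minimal python sketch of the critique-model setup jan describes - the assistant and human objects and their methods are hypothetical placeholders]

def assisted_code_review(task, code, assistant, human):
    # the critique model reads the response and points out possible bugs or
    # mismatches with the natural language specification
    critique = assistant.write_critique(task=task, response=code)
    # verifying a pointed-out bug is much easier for the human than finding
    # it unaided, which expands the range of tasks the human can supervise
    return human.judge(task=task, response=code, critique=critique)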
the really nice thing is that i think actually there's a bunch of ways in which we can actually empirically measure how well this works there's one way which is kind of what we did in the critiques paper that we published last year where you basically do randomized controlled trials with targeted perturbations so you take let's say a coding task in the paper we didn't do a coding task but the task doesn't matter here and then you have a response the response does not even have to be perfect but let's just assume it is what you do is you introduce some subtle bug in there some subtle flaw that would make it a bad response but you want it to be hard to find for humans
daniel filan how do you introduce this bug
jan leike the simplest thing is just you ask humans to do it but ultimately you want to do it with the model and i'll get back to that now you have this paired response you have the original response and you have the response that you know is worse because you made it worse and now you draw one of them at random and you show it to a human or a human assisted with ai or whatever your scalable oversight method is and then you see how likely are they to find it or you just ask them is there a bug here and now you can measure if you have a better way to assist them you can see how often they find the bug
but you can do even more which is you can directly compare this to how much does the model know that there's a problem here by training it essentially to be a discriminator between the good version and the flawed version and if you look at the accuracy for that that's the discriminator performance and then you check that with… if you ask the model or the rlhf version of the model to write a critique of the code how often does it actually write about the flaw now you get this critique accuracy equivalent and now you can compare these two numbers that's what we call the discriminator-critique gap
and if you have a large discriminator-critique gap that means that there would be a lot of problems that the model can smell or knows about or whatever metaphor you want to use but it's not telling us because if we asked it to write a critique it doesn't say it and a large discriminator-critique gap would be very worrying because it means that your scalable oversight is somehow very flawed and there's a lot of things that slip [under] your radar or the assistant is in a sense not very aligned because it's not trying very hard to help you
and if the discriminator-critique gap is pretty small that is somewhat reassuring because it means if the model knew there was a problem there it would tell you so in particular if a similar sized model or the same base model was trying to plant a problem or it was trying to let's say put trojans in your code that is hard for you to find but if you did run the code the trojans might fire and then it could take over or do something else then presumably it would know that these trojans could be there because it planted [them] itself right so if we have this other copy of the model then if we measure a small discriminator-critique gap we'd have more confidence that this can't be the case because the other copy of the model would presumably then write a critique
daniel filan yeah
jan leike and this kind of hinges on the assumption that discriminator training actually can take out of the model this kind of thing and i think there's good reason to believe that because if you're fine-tuning for a discriminator essentially you have a way to access all of the activations that are everywhere in the model because you can fine-tune all of the weights simultaneously so if the knowledge is somewhere in the model in its model's activations then a discriminator fine-tuning on if you do it right and you have enough data it should be able to extract it
and in fact you can give some back-of-the-envelope calculations of how much data you would need and if you want to identify one neuron that… this is not how it works but hypothetically if there was one single neuron that knew if the code is bad you would need the log of the number of total neurons data points to narrow it down and find it which is not that many
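[editor's note: a quick python illustration of the back-of-the-envelope argument - roughly log of the total number of neurons worth of (binary) data points suffices to single out one neuron; the neuron counts below are made up for illustration]

import math

for n_neurons in (10_000, 1_000_000, 100_000_000):
    data_points = math.log2(n_neurons)
    print(f"{n_neurons:>11,} neurons -> ~{data_points:.0f} data points to narrow it down")
# prints roughly 13, 20 and 27 - a tiny amount of data even for huge models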
daniel filan yeah i guess this actually gets to a question i have about the whole scheme the reason we didn't like rlhf is that there was this concern that it wasn't going to distinguish between things that were right and things that sounded good to the human because the human was bad at critiquing
and with this recursive critique thing with training a discriminator and then a critiquer i wonder if we still have that fixed point so one world is we have a thing we train it to write actually-good critiques and then we join ourselves with it to train a thing that writes even better critiques but i worry maybe there's another fixed point where we train a thing and we accidentally train it to write pseudo-critiques that sound good to us and we join with it and then we're even better at looking for critiques that sound good to us and then we trained this discriminator to discriminate between things that don't have problems and things that seem like problems to us i'm wondering yeah how worried should we be about things that seem good to us rather than things that are actually problems being a fixed point here
jan leike yeah i think that's a really important valid concern to have and in a way that's one of the key things we need to figure out about scalable oversight this is the reason why i'm excited to actually implement these techniques and try them out empirically because i think we will just be able to measure how much this effect shows up in practice
and one concrete simple way we can measure it is we actually just deliberately train a deceptively-aligned model and then we see if the scalable oversight flags it or how hard is it to… or how much effort do we have to put into making the model deceptively aligned such that it would pass our scalable oversight
daniel filan yeah i guess there's also these sandwiching experiments where you have an ai that's trying to answer legal questions there's some people who don't know much law and then you get some people who are really good at law and hopefully they can check if the ai has fooled the people
jan leike yeah i think these experiments are interesting and there's some interesting work on this already i think there are some fundamental problems to this setup
one is well you really have to trust your experts and the experts are going to be wrong sometimes and then you have to have sufficient domain overlap we did a little bit of preliminary experiments with this where we had non-coders who were supposed to use assistance to figure out problems in code and they didn't even know how to ask the right questions so you have to have some overlap
and then the other problem is also that i think the task that i'm fundamentally really interested in we just don't have ground truth on and even if we let's say… alignment research we're doing alignment research now we can be like oh we'll just get some alignment researchers to label some tasks and then i'd be like well what if they are wrong we don't agree about so many things that seems hard and then also there's not that many and their time is very expensive so that would be extremely expensive data
so in general i would want to have an evaluation method that doesn't rely on the assumption that we have ground truth and this is why i'm excited about the prospect of using these randomized controlled trials with targeted perturbations or measuring the discriminator-critique gap because you can do this even if you don't have any ground truth and the task can be arbitrarily hard
daniel filan yeah although even in those - measuring the discriminator-critique gap it's still got to be the case that you have an actual discriminator versus a discriminator of things that seem like problems versus things that don't seem like problems right
jan leike you mean you can have ai systems introduce the flaws right and in a way that is also much better than humans doing it because it'll be a more honest distribution of what ai systems would actually do and if you're fine-tuning your discriminator on that data you actually do have ground truth if you can trust that the flawed version is actually worse
daniel filan and i guess it's-
jan leike and the way you could trust that is well you look at the reason why it's worse and you can verify that reason which is a lot easier
daniel filan yeah i guess there's some hope that even if the ai can make you think things are good that aren't actually good maybe if the ai makes you think something's bad it is actually bad or if the ai makes you think it's degraded the performance maybe it's easier to check that it has actually degraded the performance
jan leike yeah i see your point i probably shouldn't actually use the word ground truth in that setting because it is not truly ground truth just as nothing really is truly ground truth but there's a variety of things that you can do to make you very confident in that in a way that doesn't necessarily make the task of finding the problem easier
daniel filan okay gotcha i think i want to talk a little bit about the next stage of the plan searching for bad behavior and bad internals i think was how it was put in the superalignment post what open problems do you see here that you'd want the superalignment team to be able to solve
jan leike yeah the obvious candidate here is interpretability right in a way interpretability is really hard and i think right now we don't really have any slam dunk results on language models where we could say interpretability has really given us a lot of insight or added a lot of value and this is because we're still pretty early in understanding the models and what's going on inside
daniel filan so there are some results… people do some interpretability things to language models there's this induction heads work there's this indirect object identification work for a circuit that does at least a certain type of indirect object identification i'm wondering what would you need beyond those to get what you would consider a slam dunk
jan leike yeah sorry i don't mean to diminish existing work here and i think–
daniel filan you can score in basketball without getting a slam dunk right
jan leike yeah and i think very cool things have happened but i think something that i would consider a really cool result is you use your interpretability techniques on a language model reward model like a gpt-4 size or whatever your largest model is and then you say something about the reward model that we didn't know before and that would be really cool because the reward model is what provides the training signal for a lot of the rlhf training so understanding it better is super valuable and if you can flag or find problems of behavior that it incentivizes that you wouldn't want that would be really cool
i think that is doable and i think importantly… in that sense i think interpretability is neither necessary nor sufficient i think there is a good chance that we could solve alignment purely behaviorally without actually understanding the models internally and i think also it's not sufficient where if you solved interpretability i don't really have a good story of how that would solve superintelligence alignment but i also think that any amount of non-trivial insight we can gain from interpretability will be super useful or could potentially be super useful because it gives us an avenue of attack
and if you think about it i think it's actually crazy not to try to do interpretability because in a way you have this artificial brain and we have the actually perfect brain scanner right we can zoom in completely and fully accurately measure the activation of every single neuron on each forward pass at arbitrary discrete timesteps that's the maximal resolution that you could possibly want to get and we can also make arbitrary interventions we can perturb any value in the model arbitrarily how we want that gives us so much scope and opportunity to really do experiments and look at what's happening that would be crazy not to leverage
but at the same time the reason why it's very hard is because the model is learning how to do computations tailored to efficiency and it's not regularized to be human-understandable or there's no reason to believe that individual neurons should correspond to concepts or anything near what humans think they are or should be or what are familiar to us and in fact empirically neural networks represent many different concepts with single neurons and every concept is distributed across different neurons so the neurons aren't really what matters here
but one path… i think there's two things on interpretability that i'm super excited about one is the causal version of it you want to not only look at a neuron as you're passing data through the model and say oh this fires when we have stories about canada or something this is one of the things that we found in the interpretability paper we had this ‘canada' neuron that would fire at canada-related concepts but it's only correlational you observe a correlation between canada-related concepts and that neuron firing but it's not a causal relationship in order to check that it's a causal relationship you have to deliberately write text that has canada-related concepts see if they all fire but also put in other related concepts that maybe sound like they could have something to do with canada or don't have anything to do with canada but are similar and then check that the neuron doesn't fire or you take a text and then edit it and check that the neuron turns off and things like that
daniel filan yeah i guess this is reminiscent of this i think it was called the interpretability illusions paper where they noticed on wikipedia you can have neurons that fire on one specific thing but then on other data sets that was just an illusion it fires on a bunch of other stuff
jan leike yeah and i think the other thing that is really exciting is the work that we started last year where we released a paper earlier this year on automated interpretability and here the idea is basically what you would want is you would want to have a technique that both can work at the level of detail of individual neurons so that you can really make sure you don't miss any of the details but also can work at the scale of the entire model because at the end of the day everything works together and everything is highly connected in the model so you need both and so far techniques have mostly worked on one or the other there have been attempts at… there's been work on automated interpretability before our paper so it's not like we were the first ones but i think in general if you can have some really detail-oriented interpretability work some kind of mechanistic interpretability approach that really tries to understand individual circuits or computation units inside the model the way to then scale that to the entire model is you need automation right
daniel filan yeah
jan leike but you can do that once you've figured out how to do it on the detail then you just record what you're doing i'm simplifying a bit too much here but i think that's the general scope that i would be excited about also because it really fits with our automated alignment goals we want to have our automated alignment or interpretability researcher really look in detail into the model understand what's going on then we sift through the whole… everything or we find ways to aggregate it
so for the paper we had a dump of explanations what the paper does is it writes natural language explanations for individual neurons which is not quite the right thing to interpret as i mentioned previously but it gives you a simple example of what we could do here the way it works is you just show a bunch of activation patterns to gpt-4 and you have gpt-4 write a proposed explanation
and in general these explanations aren't very good also because the task is so hard and most neurons don't do a very clearly human-understandable thing but we can just run this on the scale of every neuron in gpt-2 and then just dump all the explanations and then try to find interesting patterns and you can look at scaling trends and be like how does automated scoring of these explanations scale as models get larger or what if we add more compute or we make the model that's doing the interpretation bigger how does the quality of explanations change
and the really cool thing is that we can just measure this automatically again using language models and it's not a perfect metric and there's a bunch of problems with it but it gives you a proxy for would a human think this explanation is good and then you use this proxy at scale and just run it on a ton of neurons
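[editor's note: below is a minimal python sketch of the explain-and-score loop described above - the explainer and simulator wrappers and the scoring rule are hypothetical simplifications of what the paper actually does]

def explain_and_score_neuron(activation_records, explainer, simulator, correlation):
    # activation_records: (text_snippet, real_activation) pairs for one neuron
    # 1. ask a large model (gpt-4 in the paper) to propose a natural language
    #    explanation from the neuron's activation patterns
    explanation = explainer.propose_explanation(activation_records)
    # 2. ask a model to predict the activations from the explanation alone,
    #    then compare against the real activations as an automatic proxy for
    #    whether a human would think the explanation is good
    predicted = [simulator.predict_activation(explanation, text)
                 for text, _ in activation_records]
    actual = [activation for _, activation in activation_records]
    return explanation, correlation(predicted, actual)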
daniel filan yeah so if you think about the necessary interpretability work how much of it do you think is just figuring out a better fundamental unit of interpretation versus figuring out how to scale what's happening or other things you might…
jan leike i think it needs to be both i think it's very reasonable to believe that the former is much harder but i think the scaling part is needed for it to succeed
daniel filan gotcha yeah so i guess part 3 of the plan was something like deliberately training misaligned models and seeing if the pipeline could detect those are you planning to weed them out or actively fix them
jan leike the goal here would not be to fix a deliberately trained misaligned model
daniel filan okay just to detect
jan leike yeah fundamentally one core aspect of what we need to do here is we need to be able to distinguish between the actual aligned alignment researcher that does what we want and that truly wants to help us make progress on alignment and the one that pretends to do that but secretly wants to take over and self-exfiltrate and one way we can figure out more about how good we are at distinguishing these two is we make the second one and see if it looks like the first one to us obviously you have to be very careful with doing that because you're kind of making the scary thing that we're all trying to avoid right
daniel filan yeah i actually wanted to ask about that i think at the start of the post there's one sentence about how great superintelligence could be and then it says i think that it could lead to the disempowerment of humanity or even human extinction what are we talking about with misaligned models how do we know they're not going to do those awful things
jan leike yeah let me put it this way there's a lot of things that are needed to make the future go well we need to have the right governance structures so that we can prevent misuse of powerful ai systems and we have to have precautions so we don't run into some kind of emergent runaway effects that are still bad for humans where you have aligned ai but ai is aligned with corporations who are not aligned with humans or something and then the whole system goes off the rails but it looks good in all the economic metrics or something
also we need to solve technical alignment so we can actually align ai systems the risks that superalignment is looking at is just the last part we want to figure out the technical problem of how to align one ai system to one set of human values and there's a separate question of what should those values be and how do we design a process to import them from society
and i think these are really important questions and we really have to answer them too for the future to go well and i think for the future to go well the bar is not ai doesn't kill literally everyone but i want there to be lots and lots of human flourishing and i want to have a future that everyone is really excited to live in and the more powerful ai gets the higher the stakes will be right because there'll be more risk from misuse and if systems are misaligned things can go wrong a lot more
in particular the threat models that i think we should be particularly worried about from a misalignment perspective are systematic lies and deception and self-exfiltration
daniel filan okay when you say when you're deliberately training misaligned models is the idea that you would train them to lie and potentially self-exfiltrate but at a low enough capability level that it's fine or something
jan leike you wouldn't want to train the model actually to self-exfiltrate because you know what if it actually gets good at that right you would want to pick some other proxy task that is a lot more mundane but that still tells you that… it makes it easy to evaluate whether or not the model is succeeding at it
and then the core difficulty and the core goal is really figuring out how hard it would be for a system to have the properties that it would need to still pass the evals if you have a system that is lying to you or deceiving you that's really ineffective if it only does so sometimes and then if you ask it oh did you lie about this and it says yes it wouldn't be a very coherent believable liar the scary systems are the really coherent and believable liars that lie super consistently where it would just be really hard to interrogate in a way that points out its lies or any sort of inconsistencies in what it's saying
and i think actually to be this kind of consistent liar is not that easy and our whole goal with what we're trying to do with scalable oversight and interpretability and generalization is that we want to make it really really hard for the model to be this consistent liar but if we just do that we don't know how hard it actually is
and one way we can find out is we train the model to be a really consistent liar and then we see how hard it is how much do we have to cheat how much do we have to give the model hints or handhold it along the way but you don't actually want to train a dangerous system that if you did run it it would successfully self-exfiltrate because then you've caused the very thing that you're trying to prevent and that would be really bad
daniel filan yeah so when you say that are you distinguishing lying from other kinds of danger such that we could train it on lying but we're not going to train it on murder for instance
jan leike yeah let me try to give you an example let's say you're giving it some fake goal maybe it is to persuade a human to say a certain catchphrase or maybe it is to run a certain python command that in fact doesn't do anything but it looks interesting or something
daniel filan yeah yeah
jan leike and you can very well… it's so easy to measure whether the system succeeds and you can set up the experiment so that it's hard to do that a priori and the goal here is not to have the system stress test let's say our cybersecurity or something although we should also do that separately i think that's another effort
here the goal is really just how close are the systems that we're currently training or that we currently have to a system like that that would be deceptively aligned that is a coherent liar that any chance it gets where it thinks humans aren't looking will run specific code or things like that that's exactly what we are looking for and a priori it's really hard to get confidence of how close we are unless we just push it a little bit in that direction and see if that happens
daniel filan so the thing i'm imagining now is you're training it to do something kind of arbitrary other than the thing it's supposed to be trained for such that it has to do deceptive stuff it has to do the kinds of things it would need to do in order to cause damage but at the end of the day it's just going to put a sticky note on someone's forehead or something
jan leike exactly exactly it's like you're hiring someone to test your physical security and all i have to do is get in the building and give you a handshake and you're like yeah it seems like you succeeded at it or something or you're like okay can you steal this fake object for me because i want to figure out how good our security is things like that that you can do that doesn't have actual consequences but still tells you a lot about your security i'm excited to do the same with alignment where you stress test the alignment systems you have by training something that's particularly targeted to break and circumvent them but if it succeeds at doing that it doesn't cause actual harm it's very benign
daniel filan okay cool i'd like to ask an overall question so in the superalignment blog post in big letters it says our goal is to solve the core technical challenges of superintelligence alignment in the next four years what do you mean by the core technical challenges of superintelligence alignment
jan leike yeah this would be the general technical measures how to align a superintelligence with a set of human values and the kind of superintelligence we're picturing here is a system that is vastly smarter than humans that could possibly execute much faster you can run it a lot in parallel it could collaborate with many many copies of itself so a truly vastly powerful system
and we want to do this in four years and the reason why we picked the four years was we want to have a really ambitious goal something that feels like we can actually achieve it and also something that we can still realistically use even if ai progress actually happens really fast and the technology actually improves a lot over the next few years so that we still have something that we can deliver
daniel filan gotcha and just to be clear it sounds like this isn't just building the human-level automated ai alignment researcher it's also using that to actually solve the technical problems of aligning something much smarter than us
jan leike that's right the roughly human-level automated alignment researcher is this instrumental goal that we are pursuing in order to figure out how to align superintelligence because we don't yet know how to do that
daniel filan gotcha so if four years from now you want to have solved these core technical challenges where do you want to be in two years that would represent being on track to meet that goal
jan leike yeah if you want to work back from the four years i think basically probably in three years you would want to be mostly done with your automated alignment researcher assuming the capabilities are there if they're not there then our project might take longer but for the best reasons
and then the year before that we already want to… so this would be in two years we'd want to have a good sense for what are actually the techniques that we could use to align the automated alignment researcher do we have the portfolio of techniques that if we did apply them we would actually feel confident that we have a system that we can trust and we can use a lot and then we can hand off a lot of work to and then basically at that point we would have wanted to have broken down the problem enough so that it feels like now the overwhelming amount of work is just engineering which in that sense would leave us roughly two years to figure out the research problems attached to it
now this is kind of a timeline for this goal that we set ourselves with four years and obviously there's really important interactions here with how ai capability progress goes because if progress slows down a lot then we might just not have a model that's good at any of the really useful alignment research tasks we've tried a whole bunch to do this with gpt-4 and gpt-4 was just not very useful it's just not smart enough of a model that's the short version of it but i'm happy to elaborate and so if four years from now we're just like well we didn't have a model that was good enough then also that means that we have more time to actually solve the problem because the problem isn't as urgent and on the other hand it could also be that ai progress is a lot faster and we are like well actually we don't think we'll have four years because superintelligence might arrive sooner that could also be possible and then we have to adapt our plan accordingly and so the four years was chosen as a timeframe that we can actually probably realistically plan for but also where we have enough urgency to solve the problem quickly
daniel filan yeah so a question i think a few people have wondered about and i'm curious about as well is so suppose that on the ai capabilities research front it goes roughly as expected four years from now you have all the capabilities to have something that could be a good automated alignment researcher but interpretability turned out harder than we thought or scalable oversight turned out harder than we thought and you guys haven't made the mark what happens then
jan leike well we have to tell the public that we haven't achieved our goal right so we are accountable to that goal but also what happens if we don't make the goal depends a lot on what does the general state of the world look like can we get ourselves more time somehow or was our general approach misguided and we should pivot or… a lot of things can happen
daniel filan but basically let people know and then figure out what the good next thing to do is and do it roughly
jan leike it's a pretty high-level answer
daniel filan yeah
jan leike but i think there's something more here which is at least to me it feels like the alignment problem is actually very tractable i think there's a lot of good ideas out there that we just have to try rigorously and measure results on and we will actually learn and be able to improve a lot and i've significantly increased my optimism over the last two years that this is a very practical problem that we can just do and i think even if i turn out to be wrong and it did turn out to be much harder than we thought i think that would still be really useful and be evidence about the problem and in particular i think right now there's a lot of disagreement over how hard the problem actually is and maybe more importantly i think there is… one of the really important things we need to know and be able to measure is how aligned are our systems actually in practice
and i think one of the things i worry about the most is not that our systems aren't aligned enough but that we don't actually really know how aligned they are and then experts will reasonably disagree about it because if everyone agrees the system isn't aligned enough to deploy it won't get deployed i think that's a pretty easy case and the kind of scary scenario is more like you have this capable system and you're like well it's probably fine it's probably aligned but we are not quite sure and there's a few experts who are still quite worried and then you're like well we could deploy it now but let's not and at the same time there's this strong commercial pressure where you're like well if we deploy the system it would make a billion dollars per week or something crazy and you're like okay there's still–
daniel filan should i take a billion dollars per week as openai's official projection
jan leike laughs
daniel filan okay sorry anyway so there might be pressure to deploy something
jan leike exactly because you're like well if we deploy it later it'll cost us effectively a billion dollars and then you're like okay experts are still worried so let's make the call and hold off from deployment for a week then a week goes by another week goes by and people are like well what now and then alignment experts are like well we looked at some things but they were inconclusive so we're still worried and we want to delay it a bit longer and i think this is the kind of scenario that i think is really worrying because you have this mounting commercial pressure on the one hand and you're pretty sure but not quite sure and so that's a scenario i would really love to avoid and the straightforward way you avoid it is you just get really good at measuring how aligned the systems actually are
daniel filan gotcha
jan leike and that's where a broader portfolio of techniques is really useful
daniel filan yeah i guess it's almost even worse than that in that if you're in a situation where you could be using your ai for alignment research then errors of not deploying are actually costly in terms of safety potentially it seems like
jan leike that's right
daniel filan so i actually wanted to pick up on this thing you mentioned about just getting a bunch of good measures so a thing that i think a few openai blog posts have talked about is this idea of audits of ai systems and i don't know trying to figure out are they going to be safe to what extent do you expect the superalignment team to work on things that would be useful for audits
jan leike i mean i think i'm somewhat hopeful that some of the techniques that we'll produce might be good for audits if we do our job well for example if we manage to make some progress on interpretability then an auditor could use whatever techniques we come up with as part of their audit or i think making some kind of scalable oversight part of an audit is a really natural thing but i think also to some extent the superalignment team is not ideally positioned to do an audit because we are not independent of openai
and i think an audit truly needs to be independent of the lab that is being audited and so you want to have… and this is what makes me really excited about having independent auditors because you want someone else to double-check what you're doing and i think more generally our main responsibility is not to convince ourselves that the system we're building is aligned and safe because it's so easy to convince yourself of all kinds of things that you're incentivized to be convinced of but really we have to convince the scientific community or the safety community that the system we are building is actually safe to run
and that requires not only the research that leads to the techniques that we'll be using and the empirical evidence that the system is as aligned as we think it is that we can then show to others but also independent assessment of all of the above
daniel filan so would it be right to say even just broadly about governance efforts that you maybe think that ideally this team would produce techniques that are relevant to that effort but independently people need to do that and you're just going to focus on solving technical alignment
jan leike yep so this is another way you can put it right we really want to focus on solving the problem and we want to make sure the problem is solved and that we can actually implement the solution on the cutting edge systems but we still need independent assessment of did we succeed at that or how dangerous are the models' capabilities and those kinds of audits but to some extent we have to evaluate the alignment too that's the problem that we have to face but specifically what we want to do is solve the problem
daniel filan yep makes sense i guess branching off from there how do you see your team as relating to - openai i guess it has an alignment team or at least it had an alignment team - is that still existing by the way
jan leike so the alignment team as it existed last year had two parts and one of them was called ‘practical alignment' one of them was called ‘scalable alignment' practical alignment's mission was roughly aligning openai's most capable models and so that team focused a lot on aligning gpt-4 and then the scalable alignment team's goal was figuring out alignment problems that we don't have yet and so what's happened with the chatgpt launch and all the success is it became a big product and there was a lot of work needed to improve rlhf and improve the models and make it into a really nice product and the alignment team was just never… that was never the place to do it
and so what we used to call practical alignment work now is done at lots of other teams at openai and has become essentially a really large project that involves probably more than a hundred people or even hundreds of people and then what used to be scalable alignment is now basically what the superalignment team does the reason why we chose this name superalignment is we wanted to really emphasize that we are trying to align superintelligence we are doing research on a problem that we don't have yet and we are doing the forward-looking work that we think will be needed in the future not because we wanted to say that the other work wouldn't matter or something i think it is really important but because that is our focus now
daniel filan gotcha yeah that's useful to know or i guess i'm not going to make use of that knowledge but that's interesting to know i guess i'm curious how do you see the superalignment team as relating to other things at openai like efforts to make chatgpt nicer minimize i don't know slurs it says or something like the governance team just various other groups at openai
jan leike yeah i mean part of the reason for being at openai is also because it's much easier to work with these other teams more closely and realistically we need a feedback signal of whether what we are doing is actually useful and helpful so for example you could say well we are not trying to solve today's problems but if we are doing our job well then what we are building should be useful for aligning let's say gpt-5 and so that would be the feedback mechanism can we help make alignment of gpt-5 better with our techniques and then in terms of governance i mean there's a lot of things that you could mean by ai governance and one thing the governance team at openai is working on is how do we evaluate the model's dangerous capabilities and that's very related to our question of how can we stress test our alignment assistants what if we trained deceptively-aligned models and doing these kinds of evaluations
daniel filan gotcha yes actually speaking of why you're at openai and relating to stuff at openai as i guess you're aware there are other really good ai research labs out there that are also working on things sort of related to alignment of superintelligence i'm wondering how do you think your plan compares to that of things you see other people doing
jan leike yeah i think there's a lot of related work going on at deepmind and anthropic specifically but also various other places and to some extent we're all trying to solve the same problem and so it's kind of natural that we end up working on similar things there's other work on interpretability and there's other work on scalable oversight and i think to some extent that is… you're running the risk of duplicating a bunch of work and maybe it'd be good if we all try to coordinate better or collaborate more but then on the other hand it also avoids groupthink because if every lab tries to figure out how to solve these problems for themselves there's a natural tendency that humans are more skeptical of what the other lab produces but there's also a flip side where you can get these ‘not invented here' effects where people just don't want to use the techniques that were invented somewhere else and you believe they're bad or you have a bias towards believing they're bad
and so i wouldn't say that we are in a good equilibrium right now and i think there's a case to be made that maybe all the alignment people should be in one place and work together somehow but this is the world that we are in because essentially the cutting edge ai labs are incentivized to invest in alignment and i think also this is something that's become super clear with the success of rlhf where it makes models a lot more commercially valuable and thus it makes it more attractive to invest in the kind of research that produces these kinds of techniques and so if the ai labs are the main funders of this research then it's natural that that's where it happens
daniel filan sure i guess in terms of research agendas or ways you're setting forward do you see anything sort of distinctive or unique about the openai superalignment team approach
jan leike yeah so what we really focus on is trying to figure out how to align this automated alignment researcher so we are not trying to figure out how to align on any kind of task and we are less worried about alignment taxes as a result of that at least on that particular question and i don't think other labs have emphasized that goal or direction in that way i think also i don't know how much detail you want to go in here but one thing that we are very bullish on is just trying all of the scalable alignment techniques and just seeing what works best and trying to find ways to empirically compare them and i think other labs have specific scalable oversight techniques that they're very bullish on and they try to get to work and then also i think for interpretability specifically we are taking this automated interpretability approach that i think other labs haven't really emphasized as much and we are really trying to lean heavily in there
i think another thing that we really like to do is leverage compute to advance alignment or that's one of our main bets that we want to make and so in particular that could mean on scalable oversight we really want to figure out how can we spend more compute to make a better oversight signal what are the opportunities there and there's some obvious things you could do like best-of-n sampling from your critique model now you're spending more compute but you're also getting better critiques but really the question then is what other things can we do how can we spend more compute to make the oversight signal stronger or automated interpretability is a really easy way where you can just spend a lot of compute to make progress on the problem i think the way we would do it now isn't quite the right way to do it but i think automated interpretability in general if we can make it work has this property and i think that's why it's exciting
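(a minimal python sketch of the best-of-n idea above - this is an illustrative sketch only and the critique_model and ranker objects with their generate and score methods are hypothetical stand-ins not openai's actual setup)

```python
# illustrative sketch (hypothetical objects, not OpenAI's actual code): spend more
# compute on oversight by sampling n candidate critiques of an answer and keeping
# the one a ranking model scores as most useful; larger n means more compute and,
# hopefully, a stronger oversight signal.

def best_of_n_critique(critique_model, ranker, task, answer, n=16):
    """Sample n critiques of `answer` and return the highest-scoring one."""
    candidates = [critique_model.generate(task, answer) for _ in range(n)]
    return max(candidates, key=lambda c: ranker.score(task, answer, c))
```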
and automating alignment research obviously if you manage to do that then you can just throw more compute at it and you'll get more alignment out very roughly speaking but because we've kind of come to this conclusion that really what we want to do is turn compute into alignment we come to the point where it's like okay now we need a lot of compute and this is the reason why openai have made this commitment of 20% of the compute secured to date towards alignment because basically it tells us yes there will be compute to do this and if we actually figure out this automated alignment researcher and it turns out we have to run it more we'll be able to use more compute to run it but it means that the strategy of betting on turning compute into alignment can be successful and is supported by openai
daniel filan gotcha yeah i guess thinking about that i think that one difference i see in the openai alignment team is it seems like openai has written a lot about their thoughts about alignment so the public deadline of four years there's also a bunch of posts like ‘our approach to ai alignment' and ‘planning for agi and beyond' i guess my question is can you say a bit about what goes into that decision to be… it seems like you're putting a lot of work into being very public about your thoughts
jan leike i mean i think we need to ultimately we are all in the same boat on the question of is superintelligence aligned and i think the public really deserves to know how well we're doing and what our plan is and a lot of people also want to know and so because of that i think it's really important for us to be transparent about how we're thinking about the problem and what we want to do but on the flip side also i really want to invite a lot of criticism of our plan
our plan is somewhat crazy in the sense that we want to use ai to solve the problem that we are creating by building ai but i think it is actually the best plan that we have and i think it's a pretty good plan and i think it's likely to succeed but if it won't succeed i want to know why or i want to know the best reasons why it wouldn't succeed and in a way i really want to invite everyone to criticize the plan or help us improve the plan and i also want to give other people the opportunity to know what we are doing and if they think it's a good idea they can do it too
daniel filan sure speaking of so this new superalignment team how big is it going to be in terms of people
jan leike yeah so we are about 20-ish people right now and we might be maybe 30 people by the end of the year i think it's not that likely that the team will be larger than a hundred people before the end of the four years and really the way the team size will scale is we'll have millions of virtual people basically or virtual equivalents of openai employees or something and so in that sense we'll scale massively
daniel filan all right so there's people and yeah i guess as you mentioned the other input is computation so i think it's yeah 20% of compute secured to date why 20% as opposed to 5% or 80% or something
jan leike so we want to have a number that is large enough so that it's clear that we are serious about investing in this effort and we want to allocate a serious amount of resources 20% of openai's compute is not a small number at all and i think it's definitely the largest single investment in alignment that has been made to date but also it's plausible that it is more than all the other investments combined so it is pretty large in that sense but also if we made it much larger you can then ask the question of can openai actually realistically do this right if openai still wants to develop frontier models and pre-train state-of-the-art ai systems that needs a lot of compute
daniel filan okay and i guess i think it was mentioned in terms of compute secured up to date i guess because you guys don't necessarily know how much more you're going to get but are you imagining roughly keeping it at that proportion as you get new stuff in
jan leike i mean it depends how things go so ‘compute secured to date' means everything we have access to right now plus everything that we've put in purchase orders for so it is actually really a lot in terms of how much we'll actually need we don't really know maybe we actually don't need all of that maybe we need much less maybe we end up needing a lot more and then we have to go back and ask for more but the thing is… in particular we want to spend it wisely and not squander it because it's a lot of resources and right now we still have a lot of work to do to figure out how to spend it wisely but i'm pretty optimistic that if we had a good way to spend it and we needed more that we could get more
daniel filan all right okay so one thing that i think was mentioned in one of the footnotes in the blog post is favorable assumptions that are made to date might break down and one of them was about i think that generalization would be benign or something like that can you say how you're potentially thinking differently about generalization
jan leike so the generalization effort is a team that we started recently and especially collin burns has been spearheading and the question is basically or the question as phrased now is how can we understand and improve our model's ability to generalize from easy tasks that we can supervise to harder tasks that we struggle to supervise and so specifically you can think of this as being complementary to scalable oversight where in scalable oversight you would be looking at empowering human evaluation of what your system is doing or if you're thinking about recursive reward modeling you're like can we recursively evaluate everything that ai is doing with ai assistants that we recursively evaluate
and one thing i really like about this is it puts the human really in the loop front and center and looking at everything the ai system is doing of course in practice you couldn't literally do that because ai systems will just do a lot but you can look at everything with some small independent probability but then you're still left with this question of how do you know that the models generalize to the cases you're not looking at and so typically the way i used to think about this in the past is that you just make sure that you have mostly iid generalization where the tasks that you're looking at are the same distribution as the tasks you're not looking at
daniel filan yeah in fact i think there was this blog post on your substack that… i think it said something like you just weren't going to rely on generalization at all and just keep on training keep on being iid or something
jan leike yep so this was the original plan or at least my original thinking was i wanted to really not have to rely on non-iid generalization because in neural networks that doesn't work so well and it's poorly understood but the new question is well what if we actually did understand it what if we can actually say something meaningful about generalization and i think that's a really good question and i think also ilya has been raising this question a lot as well and so what we want to understand is on the things that we are not supervising even if they're not iid can we say something meaningful about how well the model generalizes does it generalize in the way of human intent or does it generalize in some other way that… stuff that looks good to a human but actually isn't and so i think actually we can really study this question empirically right now with carefully crafted experiments
and so what we have been looking at is trying to split existing data sets into easy problems and harder problems where the easy problems are just defined as what a small model gets right and then we are trying to understand how we can improve the accuracy of a large model on the whole dataset so this is a really interesting topic because it's kind of like it gives you a whole new pillar in the general arsenal of what i would call training and validation techniques so let's say you get that to work super well now you can supervise your reward model on some easy tasks that you have a lot of confidence you can evaluate and then if you can solve this problem you can get the model to generalize to harder problems
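(to make that setup concrete here's a minimal sketch of the easy/hard split described above - the small_model and large_model objects with predict and finetune methods are hypothetical stand-ins and this is not the team's actual code)

```python
# illustrative sketch of easy-to-hard generalization (hypothetical model objects):
# "easy" = examples a small model already gets right, "hard" = the rest; supervise
# the large model only on the easy split and measure how well it does on the hard split.

def split_easy_hard(dataset, small_model):
    easy, hard = [], []
    for ex in dataset:
        bucket = easy if small_model.predict(ex["input"]) == ex["label"] else hard
        bucket.append(ex)
    return easy, hard

def accuracy(model, examples):
    correct = sum(model.predict(ex["input"]) == ex["label"] for ex in examples)
    return correct / max(len(examples), 1)

def easy_to_hard_eval(dataset, small_model, large_model):
    easy, hard = split_easy_hard(dataset, small_model)
    large_model.finetune(easy)                        # supervision only on the easy split
    return {"easy_acc": accuracy(large_model, easy),
            "hard_acc": accuracy(large_model, hard)}  # the quantity of interest
```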
and then you have this reward model that is generalizing the way you wanted it to to the harder tasks even if you're not supervising it and now you could use that for training and then you still have the problem of how do we know it's actually aligned now but you can still leverage scalable oversight interpretability and these other techniques for validation or conversely let's say we train our automated alignment researcher via scalable oversight and we use generalization as a validation technique where we generalize the probability of truthfully answering according to the model's best knowledge
and then we ask questions like is there a subtle flaw here is there some kind of trojan in this code that a model that we aligned with scalable oversight wrote and so on and so now there's a really cool opportunity here also which is we can do this high-level cross-validation but now we can train two different models one is trained using the generalization techniques one is trained using scalable oversight and now we can have them crosscheck each other's answers and be like well are these fundamentally different models or the same models what are the important differences
daniel filan okay sorry when you say trained via the generalization technique is that just train on the easy problems and they turned out to generalize to hard problems or…
jan leike so if you understand how your models generalize from easy to hard and you can make it generalize really well by which i mean the accuracy is basically as good as if you had trained on the hard problems (that by assumption we don't actually have ground truth [for]) now you could use it as a reward model or you could use that as… it generalizes which action or which answer would i prefer if i really knew what was going on here
daniel filan yeah i wonder how… so when i think about interpretability one frame i sometimes have for it is that it's about this problem of non-iid generalization right why do you want to know the internals of the model well because you want to know what it's going to do in cases you haven't checked right so i'm wondering how do you think those two research agendas interact
jan leike in some ways they have this overlapping question they want to answer what does the model do out of distribution and they have two very different paths of answering them at least it seems to me
daniel filan and you might hope you could cross-validate for example
jan leike so for cross-validation you have to have a different or some kind of different split of your training set right so what i mean with cross-validation here is you have one training run where you train using the generalization method and then you validate using interpretability and scalable oversight and other techniques and then the second training run you train with scalable oversight and you validate with the generalization methods and interpretability and other methods and so you have two independent attempts at the problem
daniel filan yeah i guess i meant cross-validation in the very loose sense of things validate each other sort of in a cross yeah
jan leike but i mean i think the best case would be that they actually complement [each other] more than they do the same thing if you can understand or you can improve how the models are generalizing then that gives you in a sense a way to leverage the model's internals for whatever you're trying to do in the best way or let's say you're trying to extract the model's best beliefs about what's true in the world and that's fundamentally difficult with rlhf because rlhf reinforces what the human thinks is true you rank things higher that sound true to you and so you're really training a model to tell you what you want to hear or what you believe but it might not be what the model believes but this generalization technique gives you a way to extract these… what is actually the model's true best beliefs if it works we haven't actually proven this out
whereas if you have really good interpretability tools you could hope to do something similar where you would try to pinpoint the model's beliefs or something like that from its internals but i think it might be fundamentally harder because you never quite know is this the best belief the model could produce or is it just the belief of somebody smart the model is modeling or if you have… there's this hypothesis that pre-trained language models are just ensembles of different personas and you might extract the beliefs of one persona or a bunch of personas
daniel filan yeah i guess you're going to need some sort of causal modeling there from alleged beliefs to outputs right in order to have that really work
jan leike that's right you might need a lot more and then on the flip side i think in interpretability this application is really natural it's like being a lie detector or finding evidence of deception or finding a secret plot to overthrow humanity inside the model and that might be a lot harder to extract with generalization in the same way
daniel filan yeah i guess with generalization you have to pick the generalization distribution and the hope is that maybe interpretability could tell you things about it's got some kernel of lying in there but it only unlocks here or it's got no kernel of lying or something
jan leike yeah
daniel filan gotcha yeah that makes sense
jan leike yeah and i think fundamentally this is also a really interesting machine learning problem just how do neural networks actually generalize outside of the iid setting and what are the mechanisms that work here how has that come to pass in which ways do they generalize naturally in which ways they don't so for example with the instructgpt paper one of the things that we found was the model was really good at following instructions in languages other than english even though our fine-tuning dataset was almost exclusively in english and sometimes it would have these weird artifacts you would ask it in a different language let's say german i would ask it in german to write a summary and it would write the summary but it would do so in english and the model generally totally understands which language it's speaking and it can understand all the languages but it wouldn't necessarily mean that it would have to follow the instruction in german you could be like well you're speaking in german i don't know what to do in german so i could do some other thing but it fundamentally generalized following instructions across languages
daniel filan yeah yeah
jan leike but we don't know why this happens this effect has been observed in other settings and it's not unique here and there are also intuitive reasons why it would do that humans generalize across languages but i would really love to know the mechanism inside the model that generalizes that or generalizes to following instructions and code but it doesn't generalize in other ways
for example the way refusals generalize is often very different where chatgpt is trained to refuse certain tasks that we don't want to serve according to our content policy (if you for example ask for assistance in crimes or something) but then you can do these jailbreaks and that's really interesting because basically there's ways to trick the model you can make it role-play or you say ‘do anything now' or there's these really fun prompts on the internet and then the model clearly complies and happily assists you in crimes which it shouldn't do so it somehow didn't generalize refusing the task to these other settings
daniel filan yeah yeah
jan leike and so why does it generalize in the first setting in the first case but it didn't generalize here i don't fundamentally know an answer to this question i don't think anyone does but it seems like a thing that's really important to understand
daniel filan yeah yeah that seems right cool
so one question i have… so a recent guest i had on the podcast was scott aaronson and he mentioned that whenever he talks to ilya sutskever apparently ilya keeps on asking him to give a complexity-theoretic definition of love and goodness how much will that be located within the superalignment team
jan leike so there's a lot of different exploratory projects that we'll probably do and try and i think ultimately there's a question of can you summon somehow (this is ilya's language) how can you summon the alignment-relevant concepts one of the things you would want to summon is does the model fundamentally want humanity [to] succeed or as ilya would say does it love humanity and so you can ask the question of how do you… if the model is really smart and it has read everything it knows exactly how humans think about morality… you can ask gpt-4 to make different moral cases from different philosophical perspectives about different scenarios and it's generally not bad at that
so it fundamentally understands what humans mean with morality and how we think about stuff so how do we get it to leverage that actually how do we extract it how do we get it out of the model so we can then let's say use it as a reward signal or use it as a thing that the model fundamentally believes or cares about and so i think that's the core of the question
daniel filan okay so another question i have is so you're working on the superalignment team but you guys can't do everything i'm wondering what kind of complementary research could other groups or teams do that would be really really useful for the superalignment team
jan leike yeah there's a lot of things here i'm really excited about i think there's this really important question of how do we build a fair and legitimate process for extracting values from society or basically eliciting values from society that we can align ai to and i think there's a lot of important open problems here that we need a lot of progress on there's a lot of questions around how do we make today's models more aligned solve hallucinations solve jailbreaking try to improve monitoring – can you get gpt-4… if you try to jailbreak gpt-4 can gpt-4 say oh yeah i'm being jailbroken and can we generally build systems that really crack down on any misuse of the systems in the world there's a lot of questions related to that that the ai ethics community is really interested in like can we get the systems to be less biased and how can we get them to take into account underrepresented groups' views and so on
and one of the problems with rlhf is also that it's not good at optimizing for distributions of answers so there's this classical example with instructgpt where if you ask it tell me a joke it will tell you the same joke out of five jokes or something that it has in its portfolio because it does this mode collapse and that's fundamentally… it creates a lot of bias because then you're aligning to the highest mode of the labeler pool and it depends a lot on how the labeler pool was selected
i think there's also a lot of other important questions around evaluation of ai systems how do we measure how aligned the system is and can we build eval suites for what capabilities would actually be dangerous and there's a lot of work that started happening in the last year that i'm very excited about that people are getting serious about measuring these things i think that's going to be really important to just create a lot of transparency around where we are at with the models and which models are actually dangerous and which are not and i think in the past there was a lot more uncertainty and then that creates anxiety around what should we do with this model
and then going more broadly there's the more general ai governance questions of who is allowed to do really big training runs and what kind of safeguards do you have to have before you're allowed to do that or if we solve alignment and people know how to build aligned ai systems how do we get from that to a great future yeah and then in terms of… i mean i think you're kind of asking for the technical alignment directions that i'd be excited about
daniel filan i was asking broadly but i'm also interested in that
jan leike i think there's actually a lot more scope for theory work than people are currently doing and so i think for example scalable oversight is actually a domain where you can do meaningful theory work and you can say non-trivial things i think generalization is probably also something where you can say… formally using math you can make statements about what's going on (although i think in a somewhat more limited sense) and i think historically there's been a whole bunch of theory work in the alignment community but very little was actually targeted at the empirical approaches we tend to be really excited [about] now and it's also a lot of… theoretical work is generally hard because you have to… you're usually either in the regime where it's too hard to say anything meaningful or the result requires a bunch of assumptions that don't hold in practice but i would love to see more people just try
daniel filan sure
jan leike and then at the very least they'll be good at evaluating the automated alignment researcher trying to do it
daniel filan nice one question i think earlier you mentioned that you were relatively optimistic about this plan and not everyone is right so i think there's this play money prediction market that has 15% on the proposition that this will succeed there's some concerns that it's really hard to align this automated human-level alignment researcher it's got to do pretty hard thinking it potentially has a lot of levers over the future so why are you so optimistic do you think
jan leike i think it's a great question so the prediction market that you reference is particularly whether we would succeed at it in four years which is somewhat… it could be a much harder question than will this plan succeed and i think if you just asked me will some version of the plan that we currently have succeed at aligning superintelligence i would say currently i'm at 85% and last year i was at probably 60% and i think there's a bunch of reasons to be optimistic about it and in general i think this holds true even if alignment doesn't turn out to be easy
and so why am i optimistic about this so i want to give you five reasons the first reason is i think the evidence about alignment we've seen over the past few years has actually been really positive at least for me it was positive updates relative to how i expected things to go so the first reason is the success of language models they come preloaded with a lot of knowledge about what humans care about and how humans think about morality and what we prefer and they can understand natural language so you can just talk to them in some ways that makes expressing what we want them to align to so much easier than if you had some kind of deep rl agent that was trained in some portfolio of games or virtual environments that wouldn't necessarily involve as much language but could also lead to a lot of really important skills
i think also the other thing that was a big update for me is how well rlhf actually worked so when i first started working on rlhf that was with the deep rl from human preferences paper and back then i had a reasonable probability that we just wouldn't actually get it to work on a reasonable timeframe just because gans (generative adversarial networks) were kind of hard to train at the time and were very finicky and we were kind of in some sense doing something very similar where we trained this reward model which is a neural network and then we used it to train some other network and that can fail for a bunch of reasons and now we add deep rl into the mix which also was finicky at the time so i thought it might not actually work but it actually worked quite well and we got… in a bunch of games even many of the atari games it was almost competitive with training on the score function which is kind of wild
but i think much more importantly it was really interesting to see how well rlhf worked on language models and so in particular if you think about the difference between instructgpt and the base model that we fine-tuned from it was really stark to the extent that the instruction fine-tuned version the first versions we had were preferred to a 100x larger base model on the api tasks that we had at the time which were real tasks that people wanted to pay money for and that's a really really big difference it tells you that what we did during the rlhf fine-tuning just made the model so much more effective at doing the task that humans asked for
and at the same time we used very little compute to do it and we hadn't even iterated that much we hadn't collected that much data and so it was kind of like our first real attempt at trying to use this to align an actual real world system and it was wild that it worked so well and having a gpt-2-size instructgpt that was preferred over gpt-3 that's… yeah
daniel filan yeah yeah
jan leike that was really effective and so while i don't think rlhf is the solution to alignment and especially not for superintelligence the fact that the first alignment method that we really seriously tried works so well for me at least is an update that it is easier than i thought because the reverse would've been an update too if it hadn't worked it would be like okay now i have to believe it's harder than i thought
the other part of this is i think actually we are in the place where we can measure a lot of progress on alignment and so for rlhf specifically we could make various interventions and then do human evals and see how much the system improves but also on a whole bunch of other things like on scalable oversight we can make these randomized controlled trials with targeted perturbations that's a way to evaluate it or you can do the sandwiching experiments that leverage expert data or you can automate interpretability we have this automated score function and we can make a bunch of changes and see how much it improves on that score function it's not a perfect score function but it is a local metric it gives you a local gradient to improve and i think that's really important because now you're setting yourself up for iteration and you can iterate and make things better and that gives you a direction to improve
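(one way to picture the targeted-perturbation experiments is a simple randomized trial like the sketch below - the insert_subtle_flaw and evaluator functions are hypothetical placeholders and this is not the actual experimental code)

```python
# illustrative sketch (hypothetical helper functions): randomly hand the evaluator either
# a clean answer or one with a deliberately inserted subtle flaw, then compare how often
# flaws get caught; running it with and without AI assistance gives a controlled comparison.
import random

def perturbation_trial(tasks, insert_subtle_flaw, evaluator):
    caught = n_flawed = false_alarms = n_clean = 0
    for task in tasks:
        flawed = random.random() < 0.5
        answer = insert_subtle_flaw(task["good_answer"]) if flawed else task["good_answer"]
        flagged = evaluator(task["question"], answer)   # True means "this answer has a flaw"
        n_flawed += flawed
        n_clean += not flawed
        caught += flagged and flawed
        false_alarms += flagged and not flawed
    return {"detection_rate": caught / max(n_flawed, 1),
            "false_alarm_rate": false_alarms / max(n_clean, 1)}
```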
and now you can argue does it actually get us to the goal and i don't think it would get us to the goal of aligning superintelligence but i think it has a good chance to get us to the goal that we actually want to get to which is this automated alignment researcher that is roughly human-level and i think that's the third point i wanted to mention for why i'm optimistic which is this much more moderate goal when i set out working on alignment many years ago i was like okay figuring out how to align superintelligence seems hard i don't know how to do it but this much more modest goal or what i would call a minimal viable product for alignment you're not trying to solve the whole problem straight up you're just trying to bootstrap you're just trying to solve it for something that is as smart as you and then you run that a lot
and i think with that realization i was like oh actually this is a lot easier than i originally thought because we need to clear a much lower bar to actually fundamentally succeed here and i think that's a good reason to be optimistic
the fourth reason i want to mention is evaluation is easier than generation which we've already talked about and i think it fundamentally holds for a lot of tasks where it's so much easier to figure out what is a good smartphone to buy than it is to make a smartphone computer science has a lot of examples of np tasks like sat-solving or various versions of constraint satisfaction where you're trying to find a solution and once you've found it it's easy to check but it's hard to find and then also i think it holds for a lot of commercial activity where if you're hiring someone to work on a problem you have to be able to evaluate whether they're doing a good job and that takes a lot less effort than it takes them to do it if you're doing academic research there's a lot less effort that goes into peer review than goes into doing the research and of course peer review is not perfect but it gives you a lot of signal pretty quickly and then i also fundamentally believe that it's true for alignment research right evaluation is easier than generation and so if humans only had to evaluate alignment research instead of doing it i think we would already be accelerated
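(the sat example can be made concrete in a few lines - checking a candidate assignment is a quick linear scan while finding one is the np-hard part this is a toy sketch not anything from the episode)

```python
# toy illustration of "evaluation is easier than generation": verifying a SAT assignment
# is a linear scan over the clauses, while finding a satisfying assignment is NP-hard.

def check_assignment(cnf, assignment):
    """cnf: list of clauses, each a list of signed ints (3 means x3, -3 means not x3).
    assignment: dict {variable index: bool}. True iff every clause has a true literal."""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause) for clause in cnf)

# e.g. (x1 or not x2) and (x2 or x3):
# check_assignment([[1, -2], [2, 3]], {1: True, 2: False, 3: True})  -> True
```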
and then the last reason i want to give is basically a conviction in language models i think language models are going to get really good they're pretty naturally well suited to a lot of alignment research-y tasks because you can phrase these tasks as text in text out be it the more (what we talked about) the ml-ish tasks where you're running experiments and understand the results that i think other people are definitely going to do or the more conceptual or research-y things where we are fundamentally confused about what to do next or how we should think about a certain problem and then the model tries to help us understand it and all of these are text in text out tasks basically and maybe the most complicated other thing you have to do is look at some plots or something which even gpt-4 can do and so i think actually the current paradigm of language model pre-training is pretty well-suited to the kind of alignment plan that i'm excited about and that superalignment is working on
daniel filan okay so yeah part of that is about evaluation versus generation and i guess that's partly about humans doing evaluation presumably there's also a hope that we can leverage ai leverage the bit of ai that's evaluating things and get that on our side a lot of the things you've mentioned are conviction in language models seems like alignment's easier with language models… and i'm wondering i think i'm a little bit more skeptical of how useful language models are so i certainly think that it seems like they're just good at modeling text and doing text-based answers for which we can provide a good score function i guess in terms of how useful they are for alignment i think the things you mentioned were well one they're not as goal-oriented necessarily i don't know if you said that or if it was just in the post
jan leike that was in the post i don't think i said it
daniel filan okay do you believe it
jan leike i believe it
daniel filan okay great
jan leike at least out of the box if you pre-train the model it's like pre-training on this myopic objective of predict the next token on this random internet text which is not an objective that necessarily forces you to pursue long-term goals there's no guarantee that it doesn't emerge somehow and you can tell stories how that could happen but at least a priori it's a very myopic objective
daniel filan yeah i guess the concern is something like… well for one often when people are generating texts they have long-term goals right so for instance suppose you train on a bunch of arxiv papers - arxiv being the place where people publish scientific papers at least in computer science and physics i guess the reason people write papers is that they have some research project that they want to advance they want to promote their career maybe and it seems to me that if you're modeling something which is generated by things that have long-term goals maybe you get the long-term goals too right
jan leike yeah i think that's a good story of how that could emerge i think the main counterargument here is that modeling something as an agent that pursues long-term goals and then modeling how they would go about those goals and how reality responds to that and then what the final output is that leads to the next token is a lot of… it's a very complicated function and what pre-training does is it incentivizes the simplest functions to be found first right induction heads is a very good example of that where you have this simple induction mechanism that gets discovered very early in training from even small models and then -
daniel filan mechanism being just like see i've got to predict the next word did the last word occur previously in the text what was the next word for that word maybe it's just that
jan leike exactly
daniel filan and it's pretty simple
jan leike right and then you build more complicated mechanisms on top of that but because you're trying to improve on the pre-training loss you kind of want to learn the simplest functions that help you improve the most first and that's generally what happens and so before you get to the point where you're learning this really complex function of modeling someone as an agent with long-term goals there's a lot of other functions you'll learn first and this will be one of the last ones i would expect because it is so complicated and because there's only so many things you can do in a single forward pass because you only have so many layers and so i think that's a good reason to expect that not to arise very early but it's definitely something you should measure and actually look at empirically
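(for reference here's a toy version of the induction pattern described a few lines up - it's just an analogy for what the learned induction heads do not how they're actually implemented inside the network)

```python
# toy illustration of the induction pattern: to guess the next token, find the most
# recent earlier occurrence of the current token and copy whatever followed it.

def induction_guess(tokens):
    current = tokens[-1]
    for i in range(len(tokens) - 2, -1, -1):   # scan backwards over earlier positions
        if tokens[i] == current:
            return tokens[i + 1]               # copy the token that came next last time
    return None                                # no earlier occurrence, no guess

# induction_guess(["the", "cat", "sat", "the"]) -> "cat"
```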
daniel filan yeah and i guess this is one of these places where i think theory can be useful there's some interesting [points] like things learn easier functions sooner
jan leike exactly
daniel filan that's a theory question
jan leike yeah can you actually say something meaningful theoretically about this
daniel filan yeah yeah
jan leike well the lottery ticket hypothesis is not quite theoretical work per se but i think it tries to get at this a little bit
daniel filan yeah i think it's also… i don't know i'm currently in a phase where whenever anyone says something i'm like ah that sounds just like singular learning theory but this really does sound like singular learning theory people can listen to other podcasts for that
daniel filan so another thing you mentioned that was a benefit of language models is that they've read a large fraction of the internet and somehow they roughly know what it is to behave well because they know what we've written and they understand us
and i guess one worry i have here is right now in my head i don't have a nice compact specification of how i want superintelligent ai to behave and the way you can tell that is that if we did we wouldn't have an alignment problem anymore we would have hey i wrote the python pseudo-code just make the networks bigger and because i don't have that it seems like you can't pick that out from the text that i've written i'll say things like be nice don't destroy all humans or something but the solution to alignment isn't in my head therefore you might think that it couldn't be extracted from the text yeah i'm wondering what you think about that kind of concern or that kind of counterargument to [language] models having the right answer inside them somewhere
jan leike yeah i think there's some truth to this but also i think in practice it's very unclear to me how much it really matters to some extent if we pre-train a model on everything you've ever said it won't know what you actually think and you probably have a lot of thoughts that you haven't written down but it can… in general i think it would be in practice quite good at predicting what you would say about various situations events or scenarios and so in that sense i don't think it would be fundamentally hard for the model to look around in the world in the future and know whether humans would like it
daniel filan so the idea is somehow i implicitly know how i would behave if i were a superintelligence and it can do that even if i don't have a compact rule in my head
jan leike no in the sense that i think the model will just be smart enough to know how daniel will think about what's going on in the world and i don't think the blocker will be that the ai system doesn't understand what humans fundamentally want or care about or i don't think it will be wrong about these things just because it's smart and capable and has read everything and it's kind of clear if you've read everything - humans haven't read everything and it's still kind of clear to us - so the big challenge is not teaching it to the system and i think that's what language models make so visceral but the challenge is really to then get it to actually do it
daniel filan yeah so it knows what the objective is somewhere and we just need to figure out how to wire that up to the actions in the right way
jan leike yeah you could kind of imagine this really really competent sociopath that knows exactly what humans want it to do and then just decides not to do that and that is totally a thing you can do and it happens to humans as well
daniel filan gotcha okay so it's about time to wrap up you've been very generous with your time but yeah i just wanted to ask if people are interested in following your research or maybe taking part themselves what should they do
jan leike yeah great question i'm glad you asked yeah we're trying to hire a lot of people right now we really want to staff up the superalignment efforts so if helping us align superintelligence in four years sounds appealing to you please consider applying you can find our job postings on openaicom/jobs and then if you're interested in following what i think about alignment specifically i have a substack - it's alignedsubstackcom and you can follow me on twitter i'm @janleike on twitter all one word and yeah thank you so much for being so interested in our work
daniel filan yeah so the links to all of those will be in the description thanks so much for being on and to the listeners i hope this was a useful episode for you
jan leike thank you so much for having me
daniel filan this episode is edited by jack garrett and amber dawn ace helped with the transcription the opening and closing themes are also by jack garrett financial support for this episode was provided by the long-term future fund along with patrons such as ben weinstein raun tor barstad and alexey malafeev to read our transcript of this episode or to learn how to support the podcast yourself you can visit axrpnet finally if you have any feedback about this podcast you can email me at feedback@axrpnet