Lenny Distilled

Logan Kilpatrick

Head of Developer Relations, OpenAI

13 quotes across 1 episode

Inside OpenAI

Each new researcher that you add is actually a net productivity loss for the research group unless that person is up-leveling everyone else in such a profound way that it increases the efficiency. If you just add somebody who's going to go and tackle some completely different research direction, you now have to share your GPUs with that person and everyone else is now slower on their experiments.

They hear something from our customers about a challenge that they're having, and they're already pushing on what the solution for them is, not waiting for all the other things to happen... People just go and do it and solve the problem.

Is this actually going to help us get to AGI? There's a huge focus on this potential shiny reward right in front of us, which is optimizing user engagement, or whatever it is. And is that really the thing? Maybe the answer is yes. Maybe that is what is going to help us get to AGI sooner, but looking at it through that lens, I think, is always the first step in deciding any of these problems.

It's not AI that's going to replace humans. It's other humans that are being augmented and using AI tools that are going to be more competitive in a job market and stuff like that.

I think consumers are just going to be... You are going to have an edge on other people if you're providing AI that's not accessible in a chatbot. People are using a ton of chat and it's a really valuable service area. It's clearly valuable because people are using it. But I think products that move beyond this chat interface really are going to have such an advantage.

What I really want is to just ask my question. What are people doing? What are people saying about GPT-4? Get an answer to that question in a very data-grounded way. And I've seen people solve part of this problem where they'll be like, 'Oh, well, here's a few examples of what people are saying,' and, well, that's not really what I want. I want this summary of what's happening.

Finding people who are high agency and work with urgency: if I were hiring five people today, those are the top two characteristics that I would look for in people, because you can take on the world if you have people who have high agency and don't need to get consensus from 50 different people.

I think the cool thing about GPTs is you can package down like, 'Here's this one very specific problem that AI can solve for you and do it really well,' and I can share that experience with you and now you can go and try that GPT, have it actually solve the problem and be like, 'Wow, it did this thing for me. I should probably spend the time to investigate these five other problems that I have to see if AI can also be a solution to those.'

Context is all you need. Context is the only thing that matters. It's such an important piece of getting a language model to do anything for you.

My whole position on this is prompt engineering is a very human thing. When we want to get some value out of a human, we do this prompt engineering. We try to effectively communicate with that human in order to get the best output.

Crap in, crap out. If you ask a pretty basic question, you're going to get a pretty basic response. And actually the same thing is true for humans, and you can think of a great example of this. When I go to another human and I ask, 'How's your day going?' they say, 'It's going pretty good.'

We're deeply focused on these very, very general use cases, like the general reasoning capabilities, the general coding, the general writing abilities. I think where you start to get into some of these very vertical applications... that's a great example of where our models are probably never going to be as capable as some of the things that Harvey's doing.

OpenAI has such a Slack-heavy culture and it really... The instantaneous, real-time communication on Slack is so crucial. And I just love being able to tag in different people from different teams and get everybody coalesced.