Dan (00:05) Welcome to AI and Design, where we explore how artificial intelligence is reshaping the world of design. I'm Dan Saffer. Nik (00:13) And I'm Nik Martelaro, and we're faculty at Carnegie Mellon's Human-Computer Interaction Institute. Each week, we break down the latest AI developments, dive deep into topics that matter to designers, and talk with fascinating guests who are right at the intersection of these fields. Dan (00:27) So whether you're a designer working with AI or an AI practitioner interested in design, we're glad you're here. On today's episode, we'll be discussing Figma's CEO's statement that the future of design is code and canvas. Nik (00:44) A new paper from Apple on mapping the design space of user experience for computer use agents. Dan (00:49) And Wes McKinney's thoughts on the Mythical Agent Month. Well, let's get started and talk about Figma and Claude. Another big announcement between Anthropic and Figma. What is this one about, Nik? Nik (01:03) So this was an announcement saying that Claude Code can now work with Figma and send designs to Figma to then render on the Figma canvas. The big news here came from a short statement from Dylan Field, the CEO and co-founder of Figma. And they used this not only as a way to talk about the partnership, but really to talk about where they see Figma resting in regards to the software development process today. Basically, they claim that code agents are really great for creating a single artifact, i.e. the working application, but that Figma is a better place to see the big picture. An infinite canvas is a much better way to take all of your views, look at them all, and step back. And I think this is an interesting perspective, because it's really trying to push this idea of, hey, working straight from vibe coding alone maybe isn't the best way to develop products and services, and that you actually do still need views into what you're trying to do at a broader level. 
Dan (02:10) What I thought was a little strange, or maybe not strange given the world that we're in right now, but interesting, let's use our favorite word, is that it's moving from Claude to Figma. And traditionally it's been the other way around, right? You do everything in Figma and then you send it to developers to work on and collaborate with you on. But this seems to start in Claude, then go to Figma and polish it up there. And then what's the next step? Then do I bring it back to Claude to finish up the prototype? Nik (02:49) Yeah, so with the MCP, that is the idea: you can go from creating code, pushing to Figma, having editable Figma layers, and then once you're done, you can basically send the design changes back to Claude Code to then implement in the actual functional prototype. So that is their vision. I think this is interesting partly because they have to make sure that they're still relevant. I mean, the change that we're seeing in software development today is that you can basically go from a concept idea, to thinking about maybe your product requirements document, to simply handing that off to a code agent to implement, and then working and iterating purely in functional interface code from there. Like, you don't actually need to do this step of drawing things out. My students in my class are often doing this now. Now that they don't have to basically draw out an entire design to get their idea realized, they can just work with a code agent. Now that being said, I've noticed that they are also struggling sometimes when they have a vision and they can't render it effectively by describing it in language for the code agent to generate. Like, they can't say the right things to actually get what they want. And I've even reminded some people, like, hey, have you tried just going back into a drawing tool like Figma and actually drawing what your vision is? 
It sounds like you're trying to explain it to me. And then handing that off back to the code agent to work from. Like, we're in this really weird period now where processes are really changing, people are trying these things out, but also people are potentially getting stuck in working with code agents because it feels so productive. It feels so exciting, especially if you're a designer who never really had that ability before. Dan (04:43) We've talked about this before, that one of the things is trying to explain in words things you can do really fast with direct manipulation. Like, move that button three pixels over versus having to describe that is such a pain. The idea that you could move fluidly between these two things is pretty cool. I can see the future where you're able to adjust a button and then export it back to Claude, and then the button works and it's in the right place, and that sounds amazing. But just like you, I haven't seen any of my students really trying it. And I have seen my students get really frustrated trying to make things in Claude that match a design system, or just their idea, or a micro interaction, because all that stuff is really harder to do in Claude with description than it is in Figma. Nik (05:27) Yeah, one of the other things about this statement is this idea of zooming out, avoiding tunnel vision. And I do wonder how much people are getting tunnel vision when they're working with code agents, when they're sort of sticking to, like, okay, we've got something, it's functional. I mean, on the one hand, it's now easy enough to create code and functional prototypes that throwing them away should feel like it's not a big thing, in the same way I would throw away my sketches. I actually honestly think you can start thinking about software as sketches that are easily thrown away and restarted. 
And actually, in some ways I almost feel like when you put a ton of work into a Figma file, because it does take a lot of effort to really get something functional and to a high degree of quality within Figma, you might also get locked in on what you've got there, and that might not help you kick out to other ideas. So maybe the vision here really is that you're going to work very fluidly: sometimes I've just got an idea and I just want to render a bunch of stuff. It's actually faster for a code agent to render it quickly, get me a view or multiple views that I can see in Figma. Then I'll start to work with it from there, then go back to the code agent. Or I might start with a very strong vision, work within a canvas environment, and then kick it off to an agent. I mean, I think that we're going to start seeing more of this sort of hybrid workflow. And I can see that Figma is trying to position itself as: we're the canvas for this. But I can also see here that this is maybe where there could be a lot of opportunity for other companies who are thinking about the future of design tools. You know, maybe the canvas view is right. The question is, is it Figma or is it something else? And is there actually more than just Figma? It's possible that they're probably thinking about this too, and thinking about what new features are needed. It actually makes me think, you know, one of the really exciting things when Figma came on the scene was it was multiplayer on day one, right? Like, everyone could be in the same file. There's one source of truth. That's probably still really valuable to teams, but I guess I wonder, what's the next act? What is the next thing that really sets it apart? Dan (07:43) Mm-hmm. Right. 
Can you imagine a world where Google or Anthropic or OpenAI basically tries to cut Figma out and is just like, we're going to make these tools where you can do a lot more direct manipulation right in the prototype? Like, move that button over there, so I don't have to go and export it to Figma and bring it back. It would be so much easier just to do these simple things in the place where you're actually prototyping, rather than jumping between different programs for things. Nik (08:20) Yeah, I could see it. And honestly, you can make this stuff multiplayer. You can have it be something accessible, right? You have an online, basically virtual machine that's running the code, and so we can basically edit the code together. We can be in the same interface. I can imagine it may not even be that hard to simply add views. We know the canvas is a solution, and at this point, it's not particularly hard to implement a canvas. And so I could totally see a company like Anthropic putting out a competitive product of: here's canvas mode for work, and it happens to work with web design files, but you just work in code. It may not be hard for them to do that. I guess the question there then is, what's the additional value add? Is it just the canvas view that is really valuable? Because that's probably something that can be recreated. Multiplayer is something that can be recreated. So what else is the big differentiator here? Dan (09:19) Mm-hmm. One thing I agree with Figma's CEO about is that you do need that overview sometimes, because you're not, or at least good designers aren't, just designing a single screen or a single little flow. It has to fit into an entire flow, and there are lots of pieces to that. There are often a lot of touch points to that. There's a lot of systems thinking that you have to do in order to get that overview, where you're like, well, this has got a notification, it's got a setting, it's got 
part of your profile, all these things that could touch the feature that you're working on, you may need to see that. You may need to be adjusting those kinds of things as well. So, yeah, having that big overview is a real benefit to something like Figma. That doesn't mean that somebody else couldn't show you that, but I definitely know for myself, when I've been doing these little projects, that I get wrapped up in a single screen and I'm trusting it to help me do some of the flow, which is ridiculous. I've been doing this for a long time now. Like, I shouldn't be trusting it. I should have mapped out the flow and had it doing it. So I'm looking forward to this being more back and forth. Nik (10:33) Yeah, I think that's a super good point. And that systems thinking aspect of things is also something that isn't particularly well supported right now in text-only format. Like, actually, if you've ever tried to write out a user flow in just a text file, it's fine for a simple, basically non-branching flow. But the reality is so many of the products and services that we design have complex branching. They've got all these different paths you can go on, and that's much better represented by visual diagrams. Actually, yeah, this is something that Herb Simon talked about in regards to diagrammatic thinking, and how there are certain problems where, if you can simply represent them as a diagram, you will be able to solve them much, much more easily. Dan (11:03) Visually, yeah, for sure. Nik (11:20) When he was doing this work, he was thinking about how you could get computers to try to do this as well. He was understanding it in people, but then there was this thought of, well, how do we get computers to do this? And there is some of that now in the visual language model space, where they're trying to understand diagrams. 
But if we can represent something in a diagram, which is where the strong suit of Figma and any canvas mode is, we could potentially work much more effectively and much faster than trying to solve it out with language. And this is actually why the translation to code can be hard: because you couldn't actually think through the problem in code alone. You had to think through it diagrammatically. And so this is an area where I think that Figma, and really any canvas-based interface, anything that lets you work with diagrams and think with diagrams, is still going to be really important for the design work that we do. Dan (12:09) That's why every design studio in the world has got some version of a whiteboard. Nik (12:15) I diagram all the time. Many of us as designers are also visual thinkers. And this is where I think the benefits of having good visual design tools come in, even if it's not just about the aesthetic or the visual layout, the typography, the color and stuff. It's not just about that. Again, it's about working and thinking in a visual way. And that's really what tools like Figma afford us, and right now that's not being solved at all, I think, with code agents. Dan (12:43) Okay, let's talk about the new research paper from Apple called Mapping the Design Space of User Experience for Computer Use Agents. Boy, that doesn't really roll off the tongue. But it's an interesting paper. What is a computer use agent? Nik (13:03) So a computer use agent is an AI agent, a language model or a visual language model, that uses your computer in the same way that you would use it: visually, by actually interacting with the interface elements that are part of a product. So clicking buttons, scrolling through pages, reading information. That's different from, for example, just processing text or interacting with API-based systems, like actually doing a call with code via an API to get information back from a service. 
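[Editor's note: the observe-and-act loop Nik describes here can be sketched in a few lines. This is purely illustrative; the type names and the decision stub below are invented for the example and don't correspond to any vendor's actual API.]

```typescript
// Illustrative only: a toy version of a computer use agent's loop.
// All types and names here are hypothetical, not a real vendor API.

type UIAction =
  | { kind: "click"; x: number; y: number }
  | { kind: "type"; text: string }
  | { kind: "scroll"; dy: number };

// Stand-in for a screenshot plus any extracted accessibility info.
interface ScreenState {
  description: string;
}

// A real agent would send the screenshot to a visual language model here;
// this stub just shows the observe -> decide -> act shape of the loop.
function nextAction(screen: ScreenState, goal: string): UIAction {
  if (screen.description.includes("search box")) {
    return { kind: "type", text: goal };
  }
  return { kind: "scroll", dy: 300 };
}

// Contrast: an API-based agent skips the screen entirely and calls a
// declared function (for example, an MCP tool) with structured arguments.
```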
Basically, you're simulating what a human would do to actually work with a piece of software. And what this paper does is try to figure out, what do people want from these things? And how should we design the user experience? Because right now, many companies are putting these out. Claude has its computer use agent. There's OpenAI Operator. There's a bunch of them. I think at the time they actually reviewed like nine of them. But a lot of them are like, cool, we automate stuff for you. You say, book me a trip, and it goes to Expedia and then starts booking you the trip. But that isn't really what people want, and it's not really working a ton right now. Actually, computer use agents are still really nascent. Dan (14:18) Why would you ever want or need this when you have AI agents? What's the benefit to doing this? Nik (14:26) The benefit, I think, is that there are still a lot of products and services, especially for the web, things on your computer, that don't have programmatic access to let a code agent work with them. So we've talked on the show about MCP servers, right? Anything that has an MCP basically now has a software-based way, a code-based way, Dan (14:40) Mm-hmm. Nik (14:47) to let a computer agent use it. And it can tell it how to use it. It can give it the rules. It can say, here are the functions. And so it can use it effectively. But there's a ton of stuff that doesn't have this, at least right now. And so the idea is, this is a way for your code agent to just act like a person and still be able to use sites and services. Dan (15:05) Got it. Well, it's funny. I read this and I mostly understood it; I was just making sure that I actually understood it. 
But what I thought was interesting about this paper was that even though it is for this kind of very specific type of technology, it really has some good principles for AI agents in general. As I was reading, I'm like, well, this whole thing can just be extrapolated one level higher and should be how we should be building and thinking about agents, period. Nik (15:39) Yeah, I reached the exact same conclusion as you, Dan. In the paper, Table 2 breaks down effectively four major categories of what you should consider in regards to designing the user experience when people are working with computer use agents. So how does the user query the system, how do they ask for things? How do the agents explain their actions and activities? How can users control and steer what the agent does? And then how does the user develop a mental model and understand what the expectations are of the computer use agent? I'm thinking about this now as, it's maybe not full-on checklist mode yet, but this is definitely a framework where I could say, all right, we're trying to build an agentic AI experience for people. We need to start thinking: how do they ask for stuff? Do they input things with a voice? Do they input things with text? Do they input things with some other UI element? How do we explain things in the middle of the agent taking actions? How do they control it? How do they recover from an error? And then how do we actually let people know about the AI's capabilities? How do we set the user's expectations in an appropriate manner? Dan (16:46) I feel like every paper that I read about this, and maybe this has been true for every paper in this space since the beginning of the HCI field, hits that autonomy versus control paradigm, where, you know, it almost never seems to be resolvable. Like, is it always going to be a compromise? Nik (17:08) There's a lot here. 
I mean, there's stuff going back all the way to the sixties and the seventies on just, you know, how much autonomy should, say, an undersea robot have. That's, you know, Sheridan and Verplank. And I'll post a link in our show notes to this amazing figure, one that Bill Verplank drew, that really maps out these levels of autonomy and really what levels of support a computer gives. Does it do tasks? Does it support you with your tasks? Is it taking over your tasks? And I think that's something that, as you start to incorporate AI into your product, you should think about: what is it that you're really trying to do for the user? Are you trying to just lift them up and support them? Are you trying to take tasks off their hands? The other thing to think about here is how much the user should be involved in that decision making. This idea comes to me from a very recent talk from Andrej Karpathy, his Software 3.0 talk, where he maps out this idea of an autonomy slider or an automation slider. And the idea is that you could actually dial it up or dial it down. I don't remember exactly; I think he might say that as a software developer you think about this, but I actually wonder whether, as a user, you could dial things up and dial things down, and then your interaction and your interfaces would change appropriately with that. Dan (18:28) We've definitely talked about this in my UI for AI class, where you could have a slider with one end being full autonomy and the other end being completely manual, and somewhere in between there are indicators of the kinds of decisions that might happen during a task like this. The slider would be adaptive to whatever task it was on. And so it's like, well, I'm going to let you do everything but pay for it, or I'm going to let you do everything but pay for it and make the final decision to buy it. Those kinds of things. 
I always thought that was kind of an interesting way to do this right up front. Nik (19:10) Yeah, you're making me think now. I wonder if there's a new component that needs to be designed, for at least digital services, which is your automation slider, in the same way that we've got components for date pickers and buttons and everything else. It's literally like, here's your automation slider, and here's how you can attach different features and functionality to it such that users can make a choice. Dan (19:31) Mm-hmm. Nik (19:33) Maybe it's literally a slider or maybe it's something else. Maybe there's some other kind of cool interface design you can do there. I mean, there's a question this brings up, because we talk about it as if you make a decision. I mean, Dan, you and I, right, we're designers, we're thinking about this. When we design products, we're like, how much should we give? When should we give that autonomy? When should we make sure that there's a human in the loop? But I don't actually know how much users would want to choose that level for themselves, or be aware that they should choose it. Dan (20:03) Actually, the paper talks about this a little bit. The paper says that participants said they would trust an agent to be fully autonomous to do something like buy a pizza, but not to buy a laptop. And I thought that was pretty interesting. Like, where is that threshold? And are most of our tasks the buy-a-pizza kind of tasks, or are more of our tasks buy-a-laptop tasks? I suspect that they are more in the buy-a-pizza category, in which case you're like, sure, go do this all the time for me. 
I don't care about this. But anytime it hits a certain threshold of things, and maybe that's complexity or dollar amount or reversibility, you know, something like that, it feels like I want more control, and I would imagine that a lot of other people will too. But, you know, we had this discussion, was it last week, about people not changing their defaults. And yeah, are most people just going to leave the default wherever it is set, and just push the button, go, and tell me later when it's done? Nik (21:13) Yeah, my thought there is that probably people will leave the defaults on, and so it comes back to us as designers to be thoughtful and responsible in picking a good default, so that most people are having a good experience while also maintaining enough control over that experience. I could see some people being thoughtful enough with whatever they're doing, depending on the nature of the task or the product that they're working in, that they might be willing to engage with that. And then another thought is that you can design the agents to be reflective about what's going on and prompt the user to consider how much autonomy to be giving, like, hey, do you want me to do this? And actually, they talk about that a little bit in the paper too, some agents actually having interfaces to go back and ask, like, are you sure you want me to do this? Can I move forward here? And I think you brought up a really good point earlier; I actually had the same thought as you on reversibility. Like, anything that you can undo quickly that has very little cost, I don't know, maybe it's okay to just automate that stuff. As long as I'm not having to reverse so often, because it can just be annoying if I'm always hitting undo; that's not a good experience. But if it's every once in a while, and Dan (22:19) Right. Nik (22:24) it's something that can be reversed easily, it's not a big deal. 
Ordering a pizza is arguably actually not particularly reversible if the pizza's already on its way. But the consequences of you getting a pizza might be a maximum of 20, 30 bucks, depending on what city you're in. Whereas with a laptop, right, you're talking lots of dollars. Although the funny thing about it is that with a pizza, you can't send it back, Dan (22:32) No, it's on its way, yeah. Nik (22:49) whereas with a laptop, you could totally just be like, I didn't open this, take it back. Most places will just take it back and you probably can just get a refund. So, you know, this is actually maybe something for designers to think about, right? Where is the reversibility in your systems and in your services? And maybe that's a good potential way to think about how often to be checking in with the user, or when to be giving users more control or more oversight in any type of automation that you might be trying to add to your service. Dan (23:20) Anytime there is an irreversible action or decision, that definitely leads me to think, yeah, this is definitely a time for some kind of human involvement, potentially. The other thing from this paper that I thought was kind of interesting was this idea, and maybe this is pretty obvious, that people definitely don't want explanations for routine stuff. It's all about risky, unfamiliar stuff, and having the agent walk you more through that, versus, hey, this is something that happens all the time. Now, the problem I see with this: should the agent try to detect this automatically? Is there another setting or slider for this? Or is it like, hey, you've never done this before, let me walk you through this, even if it is something that might be routine to someone else? 
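[Editor's note: the pizza-versus-laptop intuition Dan and Nik are circling can be written down as a simple check-in policy. This is a hypothetical sketch, not anything from the Apple paper; the field names and dollar thresholds are invented for illustration.]

```typescript
// Hypothetical sketch: when should an agent pause and ask the user,
// given reversibility and cost? All names and thresholds are made up.

interface AgentAction {
  description: string;
  costUSD: number;      // rough dollar consequence of the action
  reversible: boolean;  // can it be undone (refund, undo, cancel)?
}

type Decision = "proceed" | "confirm-with-user";

function checkIn(
  action: AgentAction,
  irreversibleLimitUSD = 30, // pizza-sized stakes: fine to automate
  anyActionLimitUSD = 500    // laptop-sized stakes: always ask
): Decision {
  // Irreversible actions get a lower bar for asking the user.
  if (!action.reversible && action.costUSD > irreversibleLimitUSD) {
    return "confirm-with-user";
  }
  // Even reversible actions should check in once the stakes are high.
  if (action.costUSD > anyActionLimitUSD) {
    return "confirm-with-user";
  }
  return "proceed";
}
```

Under this policy, the $25 pizza goes through unprompted even though it can't be sent back, while the $1,500 laptop, returnable or not, triggers a confirmation, which roughly matches what participants in the paper reportedly wanted.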
Nik (24:17) I almost want to take something that I'm designing and say, we're going to add some automation to this, but code agent or other AI agent, I want you to think through this for me. Like, where should the checkpoints be? Where are the risky points? Where are the non-reversible points? I actually wonder if you could build a part of a code agent that can reason through that stuff as one part of your agentic system, and then simply make suggestions to you about, here's where we need to engage the user, or have better interfaces for approval, or give more explanation. No, it's a cool paper. I definitely think people should check it out. Even though this paper is specific to computer use agents, I think if you're building agentic systems and thinking about how to implement automation within your products and services, this is a really nice roadmap of what you should be considering in the design work that you're doing. Dan (25:13) And we should also note that the only non-Apple researcher who worked on this was Jenny T. Liang, who is a PhD student here at the HCII. So kudos to you, Jenny. Let's talk now about the Mythical Agent Month by Wes McKinney. So Wes McKinney is thinking back on a super famous book called The Mythical Man-Month by Fred Brooks, written in 1975. It is kind of a classic of computer science slash engineering slash product development, and one of the key phrases in there, that probably anyone who's ever worked on a software development team knows, is Brooks's law, which is that adding manpower to a late software project actually makes it later. So you can't just keep adding people in the hope that it will make everything faster. Actually, the opposite tends to happen; it tends to slow things down. And one of the things from the book is that small teams of elite people can outperform large teams of mediocre people. And now, what is that like in the world of agentic engineering? Is this still true? 
And I thought the article was pretty interesting, particularly when it talks about the actual cost to users and designers and engineers of just adding new features. Because now it is so cheap to add a new feature or a new system, but those features, while they're cheap to create, are not costless to maintain and debug and keep going. And each time you start doing this kind of scope creep, it can actually make the product worse, make the product more brittle, and introduce bugs that can be annoying or harm users. So while software is, as we talked about in previous weeks, too cheap to meter, when you're working with an actual live product and you can just add stuff to it, sometimes that's not the best thing, and it is not a zero-cost thing either. Nik (27:39) Yeah, I think one of the exciting things about how agentic AI systems are enabling us today is that I could basically be the lead of my own team, and with a group of AIs, I can realize my own ideas. Wes basically brings up the question: okay, but is that the elite team? Maybe you're the one person with the vision, which actually was shown in the book: with a small team, it almost feels as if the product that you're developing came from one mind, like it's one clear vision. And in theory, it might be easier than ever to do that, because it could just be your vision. On the other hand, if your vision isn't that great and you just start adding agents, and especially if you start utilizing those agents to actually help you with the vision, are you basically recreating the problems here? Is adding more agents actually going to just make the products take longer and add complexity where you don't really need complexity? Dan (28:42) The most powerful thing you can say on any software team is no. Whether you're a designer, executive, engineer, PM, being able to say, no, that's not a great product idea, before you just start to add it. 
And if you're working by yourself with just agents, there's no one to say, you know what? I know you're overloading what this app is supposed to be. So much of my time as a designer over the years was spent on what we can remove and still have this be functional and beautiful and valuable to people, rather than, well, we could add this thing to it, we could add this thing to it, and more and more, until it just becomes a junk product. And you see these all the time, especially in consumer electronics, where they've added all kinds of bells and whistles that no one ever uses to a product, because it looks good on the outside of the packaging, or it looks good to the people that buy it, like if it's in B2B software. And no one ever uses it, but they're a pain. It makes the product harder to use. It makes it harder to maintain. It makes it harder for people to understand. There's a cognitive cost to a lot of this stuff. Don Norman writes a lot about this in some of his books; Emotional Design, I think, is one of those, or Living with Complexity, one of those books. Nik (30:15) Yeah, I mean, the end of this article talks about design and taste as our last foothold. And I feel like, Dan, we talk about this all the time now on the show. We talk about, okay, as designers, will we still have taste, and will we still be curating? We talk about this in class. I know I do; this is a skill I want my students to develop. And I think this is where we sort of see ourselves: well, okay, if agents can create code, if they can do all these other things, then now it's mostly me steering, it's mostly me having a vision, it's mostly me curating. But then you have to develop that skill and get really good at it. And it's hard when many of us, even when we conceptually understand, keep things simple, don't add too many things, it typically still feels really good to just add more. We're all kind of maximalists underneath, 
even though we have sort of minimalist intentions. Dan (31:11) Certainly designers and PMs and engineers aren't immune to shiny things that seem like a good idea. It's just that all those things have a cost, whether we like to think about it or not. And not many people like to be the one saying, no, we're not doing this. The reason we get into this field is to make new stuff, not to deny stuff from getting out into the world. I do have a question about taste, though. I know that's kind of the thing right now, and I know that everybody is really thinking about it and hanging their hats on it. But I wonder, as we start to see tools like Impeccable come about, is that something that the agents are just going to get better and better at? Things like taste, where the AIs are going to have these tools that really help them have good taste? I do worry that that is a thing that we are maybe over-indexing on as a discipline. I worry that it's like, well, we have good taste. I know a bunch of designers who have fine taste, but it's not rarefied taste. And I'd probably put myself in that category. I'm not a visionary visual designer. Like, I can tell what looks good and what doesn't, but most of the stuff is fine. It's serviceable. And I think that the AIs are going to be able to do that, if not now, then extremely soon. So I do worry about hanging our hats on taste as the end-all be-all of what is going to save us in the end. Nik (32:56) Yeah, no, I think that's a provocative point, but I do somewhat agree. I mean, we're here in the lab trying to understand people's taste more and thinking about how we might be able to computationally represent taste, at least maybe the midline of what taste is. I mean, there are a lot of things where, you know, for many, many years, we could effectively give you a checklist of things for, say, visual design, and say, do these kinds of things and your stuff will probably look fine. 
And again, if your goal is creating a product that people use and don't complain about, that's good enough. The other thing, too, is that there are many different aspects of product development where you need to have taste, and any single designer may not always have all the taste they need. So for example, I, probably even more so than you, am not a great visual designer. I would argue I'm more of a systems designer. My taste really comes down to the way systems interact and the way people move and flow through systems. But I need someone else to come in and help with, say, visual design. I need someone else to come in and provide taste on other aspects of the work. And so, you know, in that way, if we imagine this future where everyone's designing their own products, where we're all individual solopreneurs, do you end up with these things where it's like, well, it's got really great taste in this one area, but it's missing all this? Or is it that we effectively have to have some way to computationally represent at least the midline, or mid-plus, of taste that I can import and buy from someone else? Actually, we talked about this at one point. Impeccable is a cool system that tries to impart at least a visual aesthetic taste on your web-based products. It's free and open source, which is cool. But in theory, you could charge for that. I could charge for the download, and you'd say, okay, I'm going to import this into my project. And so I wonder if maybe there's this interesting marketplace that could exist someday where, if you can computationally represent your specific taste, you can sell that taste. And we can basically buy what we need from others, in the same way we hire other people for this stuff;
now we're sort of just buying it. Dan (35:26) There's a famous essay from 25 years ago where Brian Eno predicted what he called a black box. He said, in the future, you'll be able to buy a black box Brian Eno and say, here, make this like Brian Eno, and you'll plug it in, you know, and it will do Brian Eno things. And I wonder if that's basically what you're describing now. Like, hey, give me the Dieter Rams version of this. I mean, Ferrari just paid Jony Ive umpteen millions, I'm sure, to design the interior of the new Ferrari, just to get the Jony Ive touch on it. But what if, instead, they buy the Jony Ive box, and here it is, and all right, Jony will now design this for us. But they have to pay Jony for that privilege, because Jony's the one who owns it. Nik (36:22) Yeah, now I'm thinking there's got to be almost this skill, too, of, like, meta-taste. I realize I'm getting philosophical, but if you're going to say, I need to import different elements of taste for different aspects of my work, you also have to have those things blend in a harmonious way. And that in and of itself is a hard thing to do. Dan (36:22) Crazy. Nik (36:45) Right. That's, I think, what great product managers, great product leads, are able to wrestle with, because you always have these competing voices and you always need to have a give and take among all these different aspects of something. And so I wonder if that's actually the skill, too, that people need to develop. Maybe you're developing your one deep taste area in something that really sets your work apart, but then you also have enough skill to coordinate and be able to pick the right things. Because, I mean, yeah, sure, Ferrari has chosen to utilize, you know, Jony Ive and the aesthetic he's known for within their product, but they also have to make that work within the rest of the Ferrari ecosystem.
Dan (37:31) That's actually been the problem: people are like, well, this looks like Apple, this doesn't look like Ferrari. That's been the criticism of that, too. So whoever said, hey, let's get Jony Ive for our new car, and it's like, well, this really is on brand. Yeah, that's definitely a decision. To get back to the Mythical Agent Month: one of the ideas in The Mythical Man-Month is that writing the code and building it is just a fraction of how the product gets made. It's just such a small sliver of that, as is design. My friend Kim Goodwin says doing the design isn't the hard part; the hard part is actually getting the thing built and launched out into the world. That's the really hard part: being able to navigate the bureaucracy and politics of all these large companies in order to convince them, this is a great idea, we should go ahead and do this. And that's something AI is not going to be able to replicate anytime soon. Now, it can certainly help you with, you know, stories and those kinds of things, but it is not going to help you wrangle your organization's politics anytime soon. Nik (38:53) Maybe. I don't know. I mean, there are... no, I'm serious. There are people thinking about this kind of stuff, where these agents are potentially understanding how systems and organizations work. They're understanding how to be persuasive. In theory, you could actually explore ideas that do that. Dan (38:59) Okay, okay. Yes, you could create a chatbot: pretend you're our CEO, and I'm going to give you this idea, and you're going to first critique it and then help me pitch it to him, tailored to his or her sensibilities. Sure, sure, I believe that. But I don't know. Humans are so messy that
I find that hard to believe in the near term. Not to say that AI can't help manipulate and persuade; we've certainly seen abundant cases of that in the last five years. Nik (39:56) Yeah, I'm not sure this is stuff like, you know, Claude Cowork helped figure out how to launch a new product within my very large organization. I think, yeah, you're probably right, it's not coming particularly soon. But I actually think there are people out there who are really good at this, who are probably trying to figure out ways they can leverage AI to do what they do as a person, better. And basically somehow, Dan (40:06) Right. Nik (40:25) capture their knowledge, try to embed that within a system, and then utilize that system to automate aspects of their work. I could totally see that someone's probably working on that. I think it's going to be hard. There are probably technical advances that need to happen first. And then the other thing is that there need to be ways to get the data. I mean, this is almost the thing where people say, hey, so many product decisions happen in non-recorded meetings. And at that point, the computational systems have no way to get them into their context. Dan (41:02) You can imagine extremely savvy, diabolically effective PMs bugging every conference room for better leverage. God, I hate this timeline. Oh my God. Nik (41:17) Oh, jeez. Dan (41:24) There's so much potential for so many incredible things, and so much potential for just terrible things. We used to say this at Twitter all the time: it's not a superpower unless it can also be used for evil. But boy, some days... some days. And maybe we'll cut this. Yeah, we don't want to depress our listeners. Nik (41:45) Yeah. All right, well, how about...
Dan (41:48) This is the end of the episode anyway, so let's see if any of you make it this far. And on that chipper note, we'll see you next week for another episode of AI and Design.