Dan (00:05) Welcome to our 10th episode of AI and Design, where we explore how artificial intelligence is reshaping the world of design. I'm Dan Saffer. Nik (00:15) And I'm Nik Martelaro, and we're faculty here at Carnegie Mellon's Human-Computer Interaction Institute. Each week, we break down the latest AI developments, dive deep into topics that matter to designers, and talk with fascinating guests who are right at the intersection of these fields. Dan (00:29) Whether you're a designer working with AI or an AI practitioner interested in design, we're glad you're here. Nik (00:35) Today we're talking about Adobe and Canva adding AI to their image editors. Dan (00:40) Are you an AI advocate in your organization? Well, surprise, you might be one of its worst employees. Nik (00:47) And Dan interviews Louise Macfadyen, author of the upcoming book, Designing AI Interfaces. But first, let's talk about magic layers. Dan, what's the magic here? Dan (00:57) So the idea behind Magic Layers, and what we're going to talk about with Adobe, is that you can finally do a lot more interesting stuff with very basic images. So you spend 20 minutes making a prompt to make a beautiful image, and the AI spits out an image, and you're like, God, the font is off, or, I don't like that background, or, wow, that's a really ugly thing in the image. Can I remove just that one little piece? Normally you would have to kind of go back and reprompt and hope that it gets it right, and that has often been the state of AI-generated design. But Canva has now come out with a new feature called Magic Layers, and here's basically how that works. You drop in any kind of PNG or JPEG, and the tool kind of reverse engineers it: it pulls out text, images, or backgrounds and then shapes them into layers that you can actually directly manipulate. That locked, untouchable JPEG becomes something like a vector file where you can actually start to move things around, swap colors, fix the tagline.
You don't have to start over with a whole other prompt. So it's a really, really nice feature for people that are doing any kind of image manipulation. And we talked about this a little bit last week, when there was a rumor that Nano Banana was going to be introducing this kind of direct manipulation. But it seems like Canva has actually beaten them to the punch. Nik (02:47) Yeah, I think that this is pretty exciting, because we've talked on the show before about how having more interfaces and ways to interact with things in a more direct-manipulation manner is going to be better for this kind of workflow. Or at least this is much more the sort of designerly workflow that we're used to. And Adobe is also doing this, right, within their Firefly generative image tool set, and thus also in Photoshop. You now can draw in items that you want to add. You can start to use markup line annotations to remove items or to move items. This was something when I saw it, I was like, yeah, that feels right. The idea of, I'm just going to redline this image. And in the demo that they have on the website, which we'll link to, you draw a goofy little pencil-sketch flower, and then you X out the chain-link fence, and you ask for some other things to be moved. I would argue this is the kind of workflow that we're used to. This is the kind of thing you would expect from a manager or someone giving you feedback on an image: they mark it up with a pen, say, hey, this is what I think you should change, and then give it back.
And then you would have originally gone back into Photoshop and made those changes. Now, basically, you're the one doing the marking up, asking for a few things with some light prompting, and then you get your new images. Dan (04:11) You know, we don't give Adobe enough credit as an industry, but really, Firefly has been way out ahead of almost all the other image editors out there when it comes to mixing direct manipulation and prompts, being able to adjust the images based on sliders, and just being able to play with the image. And figuring out how much autonomy you should give the AI system when generating or refining images... Firefly has been out ahead of pretty much everyone else on this. So I'm glad to see that it's getting even more and more love. And as someone who grew up on Photoshop, I'm also glad to see that an AI assistant is coming to it, because that tool is a classic. It's a stone-cold classic. And I'm loving to see AI being used in places like this that are really just part of a workflow. You're not going out and prompting; you're making stuff right in the tool that you already use. And I think that that application of AI into all these creative fields is a really great move forward. Another thing that we didn't put on the list here to talk about, but I want to mention, is Ben Affleck's AI company, which is all about using AI in filmmaking, particularly in post-production. It's a similar thing to this, where you've got AI being used as part of your creative suite to help you do the things that you already are doing, that really enhance your workflow, and that aren't forcing you to go into Claude or Gemini or one of the big AI tools. It is working within your workflow. It's a specialized tool just for doing this kind of work. Nik (06:35) Yeah, this idea of AI enhancement over a totally different, say, AI workflow, I think, is a really interesting space that companies are now trying to figure out.
Yeah, one of the things that I like seeing here is a different model of AI enhancement in the products that we already have, and arguably the workflows and the tools that we already have, as opposed to trying to kind of remake and disrupt the workflow. Now, I like the fact that there are explorations on both ends of the spectrum here, but I do think that what Adobe is doing here is leveraging the fact that it has a long-standing platform, a long-standing set of interface and interaction primitives that it can build on, in a tool that, you know, so many people use and so many people grew up using, that they can really just add onto this. I do want to say, for full disclosure, that my lab has been supported in the past by Adobe, although while we do research-oriented stuff, we don't work directly on the products. But I like seeing these explorations. One of the other things, too, is that the way Adobe is doing this, they're also providing access to multiple models. It's not just the Adobe models. Although they are developing their own internal models to do image generation and manipulation, they're also providing ways to interact with outside models: with Google's Nano Banana, with OpenAI's image generation, Runway, Black Forest Labs. So they're providing artists and creatives with the ability to work within their tool, work within their own workflow, but actually get access to these other things, rather than having to sort of break and change their workflow. And I think that this is gonna be a big question: is that a winning model for a lot of pieces of software, for legacy software? Can they keep adding AI enhancements and basically keep their sort of crowns? Or are we gonna start seeing new disruptive workflows emerging that really displace some of where these tools fit in people's workflows today? Dan (08:45) Right, is Nano Banana going to be able to take on Photoshop?
I guess we'll see. I think that Canva and Adobe sometimes get left out of the discussion around AI, and clearly there are people there that are doing some really interesting work in bringing AI into their products in a way that doesn't feel forced, doesn't feel like they're sprinkling AI on like Salt Bae. So I really want to give a shout-out to the unsung heroes of the AI sphere. Nik (09:27) Yeah. And one of the things here, you know, this aligns really well with the class that you and I teach on and off, Design of AI Products and Services, right? Because in a way, it's looking at what the core value proposition of AI is, finding where it's going to provide direct value to the user, to the customer, right? Not trying to make it, this is about AI. It's, no, this is about doing a job better. This is about actually completing a task more effectively, saving you time, saving you work, making your work more effective. And AI is the way in which you get there. It's an amazing technology that you add in to get there. Dan (10:09) Right. And isn't that what design is supposed to be about? No one really cares about AI, the technology. They care about doing what they want to do and having that be an efficient, pleasurable, interesting experience that's better than it needs to be. I think we get so wrapped up in AI, AI, AI, and we have a podcast here called AI and Design, so we're part of the problem. But I think we forget that AI is a technology, it's a material, and what we should be doing with it is finding the best places to utilize it and not have it be this big thing. It should just be there, and it should work, and it should help us do the things that we want to do and be valuable. And I think that's a noble calling right there. Nik (11:06) So this next story is an article written by George Sivulka of Hebbia, which is a financial AI company. And it's titled Institutional AI versus Individual AI.
And the gist of this is that individuals using AI are becoming more productive, but you don't actually change the course of your institution, your company, with just a bunch of people becoming more productive. Actually, in some ways that could hurt you if you don't have the systems in place to leverage that productivity. What you really need is institutional change. The metaphor that gets used here is the introduction of electric power and the changes to manufacturing: just because you have electric machines doesn't make you a better company. It was actually changing the floor plan and the manufacturing line that unlocked a lot of the potential that was there in electric power. I actually agree with this. I think this is a really interesting take: okay, everyone running around with AI is maybe not the best thing if there's no institutional backing behind it. Maybe you're kind of chaotic when everyone is running around with their own workflows, using potentially their own AI models in their own different ways, as opposed to a coordinated strategy around how a company uses AI and the coordination that comes from that. That may be a better thing, and that's really what companies need to be thinking about today. Dan (12:35) Yeah, I hadn't really thought about this individual-versus-institutional framing. I think there is a little bit of a false dichotomy going on here, in that, well, sure, you may want to have a coordinated use of AI amongst your teams, but it doesn't also mean that I can't have my own tooling to help me do my individual job, as long as it is part of a coordinated effort. Because having a whole bunch of people running around and just building random things does not make your company more effective. It does not make your company more productive.
And maybe this is really the argument for our product manager friends, that what we really need here is some of that coordination between everyone, which was something that we didn't have back in ye olden times, where you had engineers doing their thing and designers doing their thing. It was good if you could coordinate between each other, but sometimes there was just that missing piece, which is where the product manager stepped in. And now, because of AI, you can have engineers doing a whole bunch of stuff, you can have designers doing a whole bunch of stuff, you can even have product managers doing a whole bunch of stuff, and they can all be running in different directions. And that's not how you get very far at all. You get a whole bunch of different pieces of things that may not add up into a cohesive whole. Nik (14:11) Yeah, Dan, I like the fact that you're mentioning our product manager friends, because I feel like I've heard stories online where companies are having product managers do more design-oriented tasks, because they kind of can, right? You can prompt a model, you can have it generate mockups, you can have it create prototype code, and you can test out ideas. Where if your role was to work with your user research team, then coordinate and prioritize, okay, what's a feature list that we want to work on, you can already start to do some of that. But actually, you're making me think now, given your analysis of this: actually, no, we need our product manager friends to be doing the product management stuff, and probably thinking about how AI gets utilized for coordination, communication, right? How are teams actually using it effectively? Maybe that's a role they might need to step into, as opposed to, we can reduce the number of designers and engineers because the product manager can now do some of that, which is where I feel like sometimes people think, that's what we'll do.
'Cause, you know, they still know they need those managers to coordinate the larger vision of things. But yeah, now I'm thinking, actually, everyone probably still has their role. But there needs to maybe be a bit more thinking around how the organization uses AI, and the product manager may be a great person to help figure that out. Dan (15:36) Right. This past week, my old colleague Peter Merholz put out an article that was all about: there's always been overlap on job roles, deal with that. But his argument was, lean into those skills, and that's where you're going to have some differentiation. And I think the product management job is still a skill of coordination. And we don't talk a lot about that. We talk about all these individual tools and people being empowered to run off and be really effective. But we aren't talking as much, at least not on this podcast and in the circles I run in, about using AI for better coordination. I think that is an untapped area, at least that I know of. I'm sure there are people at Slack or some of these productivity places that are really working on that, Jira and stuff like that. But how do we help with the coordination piece of product development? Nik (16:47) Yeah, I'm actually thinking now there's maybe a place for, like, an institutional AI designer, in a way, kind of a systems designer. So someone who's really thinking about how to apply this technology at the institution level, at the broad level. That's maybe not something that we've seen a lot of, and it's probably going to be a forming role, where you need someone who probably understands a lot about the organization, who then also has a deeper understanding of how the AI works and can be used.
And right now, I don't know how many people are in that position, because it's interesting: what we do see a lot of from the individual AI side is very experienced people being able to utilize the AI very effectively to improve their productivity. We're seeing a lot of junior people learning how to work with this and basically forming completely new processes. But this sort of understanding and ability to apply these things at a company level, at an organizational level, I don't know if I've seen as much of, and I wonder if we'll start to see more of that. Dan (17:48) Well, I wonder if these roles don't live within product development as much. So they wouldn't live under, like, a VP of product or chief product officer. They would instead live under, like, the chief information officer or the chief operating officer, in those departments that are more about things like managing data and workflows, the people that install Slack in your organization, for instance. Those are the people that are really concerned about doing this kind of thing. I wonder if there's some interesting hybrid between what we would think of in our world as product managers and, on the other side, something that is like an operations manager or the COO. Like, is there something there in some of these big companies? And, I mean, perhaps this is an unknown area to us, and we'll get a lot of angry emails telling us, you don't know what's going on, there's a whole bunch of this already happening in coordination stuff. Maybe that's true, and we'd love to hear from all of you. So like, subscribe, and share, everyone, and send us some feedback, because we'd love to hear when we're wrong. Well, sometimes. Nik (19:15) So now, Dan, I'm quite excited to hear your interview with Louise. I'm excited about this new book.
I think that we're gonna start seeing a lot more knowledge coming out and being codified around designing AI and designing with AI. So I'm excited for that. So yeah, Dan, take us into the interview. Dan (19:34) We are now live. We're actually not live, we're just recording. Yes, we are on air. Not live on air, just on air. Okay. Great. I am here, not live but on air, with Louise Macfadyen, Louise Macfadyen (19:37) You're on air. Dan (19:49) who is the author of the upcoming book, Designing AI Interfaces. Louise, welcome to the AI and Design Podcast. Louise Macfadyen (19:58) Thank you so much for having me, Dan. Dan (20:00) So I guess maybe we should start and explain what the book is and why you wrote it. Louise Macfadyen (20:05) Absolutely. You know, I've talked about this a lot: the book really came from a desire for the book itself to exist, regardless of whether or not I was the writer. Because as a practitioner working in AI, you know, I started out working on TensorFlow, which was an early open source machine learning framework from Google, back in 2018, I want to say. You know, back then, it wasn't necessarily LLMs as the primary AI driver. It was many different things. It was sort of like posture detection. It was analyzing just large data sets of images. Dan (20:36) Yeah, I often tell my students, if we were taking this class 10 years ago, we'd be talking a lot about computer vision right now. Louise Macfadyen (20:43) Right, right, which was a progenitor field and kind of gets forgotten when we think about large language models. But in the time since I had done that work, I paid attention not so much to what had changed in the field, for example, I was not looking quite so much at image detection or emotion classification or anything like that, although those still are relevant, but to what had remained consistent in terms of large language models and the way we adopt them. Because when I first worked on them, Dan (20:49) Hmm?
Louise Macfadyen (21:09) it was incredibly difficult to think about how you would stack up a user experience, the prioritization, what you need to explain to a user about what's happening under the hood as you're working, or really any of the prioritization we would normally do when we were designing. And so I began working on AI again in a much more concentrated way when I joined Microsoft a couple of years ago. I'd worked on design systems when I was at Google, and I was bringing some of that knowledge to Microsoft to build sort of Copilot experiences with them. And it occurred to me that the time is really ripe for us to begin to have proper discussions about how users are experiencing AI, not just the actual technical parts, but what knowledge do users bring, what expectations, how are those answered, and how can you even begin to categorize and build a framework around AI itself? And I can honestly tell you, I just searched for that book high and low, and I assumed it existed. And of course I went to Rosenfeld and I went to O'Reilly and looked for all of the examples I could find. But that thinking hadn't quite come out. And I can totally rationalize why, as we all can: it's a very quickly moving field. It's difficult to want to put one's hand up or put your head above the parapet and make a declaration about where we're at. But I did begin to feel, from what I'd seen in my previous work, that there is a clear structure around AI that we can probably anticipate will stick around. And that structure, in a very basic way, is that we are always going to have an input. We're always going to have some form of computation, visualized potentially as latency. And then we're always going to have an output. And so when we break it down that way, we can also think about how familiar we are with inputs, which, you know, you and I communicated today to organize this through an input, through a text message.
Computation, we're familiar with waiting, though not waiting this long, which of course is a different contingency to manage. And then outputs are something we're practiced in experiencing, but these operate very differently, and we know that there's risk in a very different way than we previously had to manage. And so, yeah, when I came to write the book, or came to beg O'Reilly to let me write the book, this was kind of the case I made: that as we begin to make more sophisticated interfaces, this can be our origin point. It's not declarative about how to design, but we can certainly think in this framework. Dan (23:24) You have this input-computation-output framework. And I think we think a lot about, or at least I know I do, that input and output piece of that. Can you talk a little about the middle piece, computation? Because I think that's where things get all crazy in the AI world. Louise Macfadyen (23:43) Right. Well, we talk so much about the black box in the AI field, and there's no actual part of that framework that the black box exclusively exists within, right? Like, we're looking at a very confusing interface sometimes when we're forming an input that could be interpreted as a black-box output; you're not sure where it came from. But the most confusing and mysterious part is what's happening in the middle, because how we're choosing as designers to articulate what is happening under the hood is a reflection of the fact that engineers don't necessarily know what's happening under the hood. Like, the way that we have arrived at this very sophisticated technology is sort of a marriage of very complex linguistic rules and enormous amounts of data. You know, the papers that were written that helped guide us to this point in this technology themselves make it very clear that we understand some of it. We don't totally understand all of it.
And so that's why the human experience layer is so important, because articulating what we can know and explaining it to our users is very, very challenging. And then you layer on top of that the fact that it's computationally intensive. These things take a lot longer than we are familiar with waiting. I give some examples in the book of how we've become adapted to waiting for a website, but we won't wait that long. You know, we'll wait a couple of seconds and then we'll just give up on what we're doing. And how we articulate as designers why you're waiting, how long you should be waiting for, if we're able to get the information to tell you what window of time you should expect to wait, these have all changed in the AI era. Because, you know, until recently, when we started having more of a reasoning-model structure, there was really no defined way to let users know what is happening. An image model can't give you, like, a little glimpse of the image it's making, because it's putting together the image over time, in some cases rationalizing it against another model to see if it's hitting the mark. And so users get very shut out of that process, and figuring out how you can tell that story, even potentially liaising with engineers to work out if there are data points, or working out what type of user you have and what information might be useful to them, or patience, or a notification system, ends up being a really good value add to the user experience. Dan (25:54) Yeah, I know that Jakob Nielsen has called this slow AI, because there is this kind of latency that happens. And we lose track of our train of thought. We lose track of what we're supposed to be evaluating. All these kinds of things make a behavioral and a communication problem. So what do you think? Do you think that designers should treat latency differently? Is there something that we should be doing?
Something that maybe is signaling anything about the system's competence, for instance, or anything like that? Louise Macfadyen (26:30) Well, I think in some cases we can understand and anticipate latency, right? Because we have model routing. And so we know if it's something that happens on device, it's gonna be very quick. If it's a larger query, it's being routed differently, and you can send that message. But it's useful even for organizations to begin thinking about how we display that. There's a pattern in OpenAI, in ChatGPT, where they'll give users the option to use a smaller model or a cheaper model if it's taking too long. It's called, like, an escape hatch. So it's useful to think about how long a query might take, and also how much that ladders into the sophistication of the answer. Thinking about deep research, which came out last year, that is not useful for every single type of query. And assessing whether a query might actually require it is something that you want to give users the most insight into possible. Now, I will say latency also has a few different implications for just user experience in terms of positioning. The posture that the user is bringing, whether they want something answered right now or they are willing to wait, is really relevant, and also how capable the tool is of filling the gap. So the example I give is deep reasoning, because this is really useful. I think of deep reasoning as a latency pattern. It's also very useful for validating some of the problems we have with hallucinations, but in essence, it takes a query and sort of divvies it out into tasks, and in many ways, that helps with user understanding. Because while that's not always what's happening during a latency event, it is indicative to the user and can prime them for better knowledge about the way that their query could be interpreted in the future. Because we're also very lazy as humans, by nature.
And our queries tend to get shorter. I mean, sometimes I'm shocked by the horrible grammar I use in my AI queries. And so being able to interpret what is happening under the hood and how your query is interpreted makes you a better user, which is something that, once the output is here, we don't think quite as much about. I make the point in the book all the time: we've been Googling for 25 years. We have a really sophisticated understanding of how to do that. People append Reddit when they know they wanna get, like, actually useful content. We don't have that sophistication at all with models, and also models change so frequently that we don't know whether we're actually going to get the model to stop using the em dash when we ask it to. Dan (28:52) Right. Well, it's funny, because I've been just noticing in, like, the last week, maybe, that both Claude and, I think, Gemini are giving you a time counter for how long queries are taking now. And I hadn't noticed that before. And I'm like, huh, interesting. They are starting to provide some feedback that at least something's happening. And I use Spiral sometimes, which is a tool for writing built on top of Claude. And it is interesting to watch: sometimes during processing, it'll tell you things like, now that I understand this thing, I'm passing it off to the writer portion to do the writing part. And that I find is actually pretty helpful, because you start to get a mental model of how the computation part is working that I just didn't have before. Louise Macfadyen (29:46) Well, absolutely. I think of them kind of like sheepdogs sometimes, where it's like all they want to do is do the job as well as they can for you. They're trying to fill this sentence or complete this task as the model has been trained to do, or, trained to do is wrong to say, but in a way that it's familiar with. And it's constantly coming up for me that I want to make the model do less. I'm like, okay, let me constrain this request more.
And seeing deep reasoning has been the foundation for that for me, because I'll use Claude Code and I'll see a bunch of tasks happen that didn't really need to occur, and it makes me fence in much more carefully the type of queries next time. I think that the development of planning modes and things like that is also a version of this, where we're getting to a point where we also want to be a little more thoughtful about how much time we're spending, and we're getting sort of fenced in by costs eventually. And so it's a good feeling to start being wiser about the type of queries you use and the extent of the computation. Dan (30:41) I think one of the missing places for designers is in that planning stage, just making it easier to actually adjust and change the plan. Because so often when doing things like deep research, it's like, I don't need that. I just want this little piece. I don't need a 20-page research report on this. I just need you to think really clearly about these two things. And if it's three pages, I'm going to be super happy. If it's 20 pages and it takes five to 10 minutes to do, I'm not going to be happy. And you're right, the cost of it, both environmental and, like, token costs, it's like, ooh, God, I don't want to do that. Louise Macfadyen (31:26) I mean, this is a big opportunity, I think, for designers. One of the things I was working on when I was at Microsoft was trying to help build a logic around when something should just be in the text, where the model asks you to clarify your question or is giving you information, or when it gets converted up into UI. Like, what's the strategy there? And one of the things I came back to time and time again is that it's very useful when models want to clarify with you after your initial input, but how we display that is very important.
And, like, I'm beginning to see more and more trends within, like, Claude, within OpenAI's ChatGPT, where they do give you almost like a quiz, or an option to select in and out of what you want to get out of the query. And for me, I think that designers have a great opportunity there to start to bring in a little more of the user expertise that we bring as a field, as the voice of the user. Dan (32:13) Just having it think less about efficiency and more about what you really need to do. Putting some of that judgment about what the user is actually asking for in, I think, is a good practice. Louise Macfadyen (32:27) Yeah. Well, right, and it's also about how wisely we can help the user use their time. Because it's very impressive to have a model write an entire wall of text. But I often just go back to in-person communication, interpersonal communication, for this. And I think about, I'm chatting to five of my girlfriends and we're trying to figure out an event that we're having next. And we could do that by chatting, but we can also just set up a poll, and we can arrange all of our information in one place that way, with two buttons. And to me, that's exactly the logic that we want to bring into AI, where we're like, cool, very impressive to have just an entire wall of text. But reading a wall of text, especially when it's AI generated, is not as useful as just having the opportunity to make a decision or see data laid out in the way that we're most familiar with seeing it. Dan (33:14) I feel like we've moved beyond computation now to output, because I want to talk about output too. A major theme in chapter five that feels like we're dancing around here is that you say the output's never just an answer, it's a designed artifact. So what do you mean by that? And how can we as designers better utilize things like Louise Macfadyen (33:18) Oh, sorry.
Dan (33:39) formatting or hierarchy to make that raw AI output much more actionable and scannable for users? Louise Macfadyen (33:47) So in so many ways, the output is what you came to get in so many of your AI experiences. You've achieved something, the model has created something with you, and that's really useful. However, it's totally just an object until you take action on it, right? Until you share the document that was created, if it's a very sophisticated output, or you just go and begin to adapt it, you begin to think about it. You had a trip plan; now you start booking, or enabling an agent to book the trip. You have to choose, as a user, to begin to take that second step. And so as designers, our opportunity is to begin to design that surface for the next option and enable it, which I think is interesting for us to consider, because we also aren't dealing with a one-step process. Users are often taking an output, reconfiguring it, sometimes even going into different platforms and going back and forth. And so figuring out how to have continuous integrity for that output, or whether that's even useful, is very interesting: breaking down and fracturing an output and providing a UI that allows for that. Really fascinating. I'll also say, for me, I consider nearly all AI conversation to be a method for speaking to another human. Most of what we're creating with AI is actually for human consumption, and only a small minority of communication is just for personal interest, like people who have AI relationships, or who are doing introspection with a chatbot for therapy, things like that. But really, most of the time, we're actually creating an artifact that we want to use that's going to in some way be seen by another human. And I think identifying that is very useful. In the book, I list out, and this is probably old research now, but Google made some categories of their eight to ten most frequent types of search.
And they're kind of charming to look at, because you'll identify yourself in all of them. Dental pain, I'm trying to figure out what's going on. Or I just didn't want to type in an entire URL, so I did this. I'm looking at a news piece. But I make the argument that once we begin to pull in the data about the types of search users are doing, we also begin to have the opportunity to give much more useful, forward-looking actions on an output. And because we're quite familiar right now with AI being applicable to all people, you know, ChatGPT and Claude, every user can use them. We're not so sophisticated in our knowledge, or I shouldn't say sophisticated, we're just not as familiar with constrained tools that are built for your specific type of user, your specific type of query. But yeah, I believe that's probably going to make AI experiences much more beneficial, once the route by which we're going to take that secondary action is clearer. But it does come down to getting intent. With these big tools that are for every single type of user, it is so much harder to assume any intent about that user from the outset. Dan (36:38) And of course, every output is going to have a confidence score, right? Louise Macfadyen (36:42) Right, right. It's a charming thing that came up so many times when I was writing this book, because of course, you're writing a book, and you're writing a book about AI. So naturally, you're constantly feeding the models bits about your book. And, well, actually, I tested lots of different models when I was doing this, because it was really useful for me to get a different sort of experience across models. I had so many different services to unsubscribe from after I finished this book. But what I found time and time again is that if I asked about the top pressing issues for AI UI design, I would get this bit of almost folk logic that models had ingested about confidence scores.
And at first I was like, can I even find examples of these? Which was challenging. Dan (37:12) Thank you. Louise Macfadyen (37:30) And then I sort of addressed the logic of a confidence score, which I get into in detail in the book, but I'll talk briefly about it here if that's okay, because it's a real problem. The logic doesn't make any sense, because I can understand the desire to solve hallucinations by displaying some indicator about whether an answer is right. However, on the surface, if we knew an answer was incorrect, we wouldn't display it. You would need to know the correct answer in order to be able to show the distance between the correct and the incorrect answer, to evaluate it in any way. And there's no value to showing an incorrect answer. I guess for evals or something, you might want to show the distance between what you got and what you should have got. But beyond that, models are not showing correct answers. They're not evaluating something for its correctness. In some ways, they're evaluating whether or not the sentence sounds correct, whether it's similar to something that's already in the corpus of data. Dan (38:22) Right, whether it seems correct is a completely different thing from whether it's factually correct. Louise Macfadyen (38:29) Right. I wish I could have asked for this when I wrote the book, but, fun fact, they really do not let you choose the animal that goes on the front. I would have loved to have had a mockingbird, because in so many ways it's so similar. Something that's just repeating back to you, something that doesn't actually have sentient knowledge underneath, is a very good analogy for what we're seeing when we're asking a model and it's telling us what it feels. It's not doing a bad Google search, it's just imagining Dan (38:40) Mmm. Louise Macfadyen (38:54) what the answers to that question might be and giving them to you. I want to start a website, by the way, for the best hallucinations.
They're getting a lot better now, so it's not as funny anymore, but. Dan (39:04) Yeah, I really do miss the six-fingered hands and the weird AI text and stuff like that. That's kind of a sad loss. I think we're going to see a resurgence of that in the future as a kind of retro AI style. It'll be a new artistic movement, I think. Louise Macfadyen (39:11) Yep. Wistful, yeah. I got a very confident paper shown to me about HCI. I forget exactly what the query was, but it was to do with HCI. And I was very intrigued by it. I thought it was really relevant. And then I opened it up and it had hallucinated, but it was a real arXiv link. And it was a study on the manufacturing of Argentina's condensed milk industry. It was easily the best hallucination. Dan (39:55) Very relevant, very relevant, I'm sure. Louise Macfadyen (39:58) It was good stuff. Dan (39:58) Shifting gears a little bit, can we talk about the shift to canvas interfaces? You talk a little bit in the book about the shift from linear chat to more spatial canvas environments, and we're seeing a lot of that come up pretty recently. So how do you see that changing into this kind of multimodal workspace, and what does that change about users' relationship to the AI, compared to a much more transient conversational thread? Louise Macfadyen (40:30) I think we felt so comfortable with text inputs partially because it's how we speak to other humans, but also because it's a discrete way for us to work outside of a riskier surface. If you're creating something, you might want to have the AI tool separated, so that you're not making changes directly in the live product that you're keeping. As we begin, I think, to see AI layered into more and more products, what has been referred to as on-canvas is in many cases still just a text box, but it's in that product.
And then we're also seeing, I think, a bit of an increase in what I've been referring to as the sparkle emoji: AI kind of crowbarred into the product. And for me, I think that we'll see. Dan (41:11) Sparkle sickness, I've heard people call it. So yes, I love it. Louise Macfadyen (41:15) Sparkle sickness, that's a great term. I think that AI doesn't come out into a neutral zone at all. People have very strong feelings about these products. And one of the places where we are least successful as designers is when we are just putting AI on top of our product in any case. And that's much more risky with canvas products than it is with a chat, where you're off in your own interface. Where I think we have a great opportunity is to balance the knowledge we have of our users, which long-standing products that were not previously AI-enabled have the opportunity to do even more than AI products do, by looking at the patterns their users exhibit in achieving tasks and understanding how they do it, and then slotting in the capabilities that a model has. And, you know, this sort of dates back to the field of ergonomics, when we would look at how people's bodies moved and allow them to do that without harm. Doing the same thing is something that should feel so comfortable for us as designers, like working out the passage, the journey, of a task being accomplished. But because right now designers sit a little bit on the back foot in terms of how AI decisions get made at a product level, they haven't been able to advocate for that, I think. Returning designers to being the voice of the user, that would be the dream to come out of this book, honestly, because no one else is better positioned within an organization, or really tasked in the same way, with working out how a user wants to achieve their goal.
And because of the very complex nature of AI, because it's very expensive, and because there's a big tip-of-the-spear kind of argument to be made, that sort of gets deferred up to PMs and engineers and product leaders. And I think as we begin to enter this slightly riskier space, where we already hear so much resistance about just crowbarring AI on top of our products, designers taking the time to really see where AI is useful within someone's journey, and displaying it at those intervals in different postures, you know: automatically, perhaps, think of it like spell check, or dynamically, giving them an option or a panel. That's where I think canvas experiences will start to actually feel productive and useful. Dan (43:29) I like to think of it as AI in the loop rather than humans in the loop, because the question is where's the best place to put AI into this, rather than, well, AI will just do it all for you and humans are just along for the ride. I really hope that as we do move to these more canvas-like Louise Macfadyen (43:35) Hmm. Dan (43:48) experiences that, yeah, we start to get that really nice blend of more human, more direct manipulation. And yeah, I'm not going to beat that drum anymore. This is an interview with you. But one of the things... great, great. I love it when people agree with me. But this, I think, brings up a thing that I'm so glad you put in the book. Louise Macfadyen (44:02) No, but I agree with you. Dan (44:13) And I think it's really important and people aren't talking about this enough, which is organizational AI maturity. You outline in the book these three levels of organizational readiness. And I think the farther down you are in that, the more likely you are to just sprinkle AI onto something. And I just want to hear you cook on that for a little bit. Louise Macfadyen (44:39) We have had a very successful way of building products for many years, where we have a triad.
We have the designer, we have the engineer, and we have the PM. With information management in that structure, when we're building mainly web, iOS, and Android based tools, maybe with some API data being pulled in, the majority of the information sits with the engineer, and designers are affiliated with that information, but they're not directly impacting it or having to have some level of understanding of it. Now, with how quickly models can change and their capabilities shifting, sometimes the capability is chasing something that doesn't line up with what the user wants. We need a much more adaptable structure that really uses, as I argue, the PM as kind of the method for gathering and holding that information and passing it back and forth. And this is part of the argument I was making a little earlier about more of a bottom-up approach to AI products in general. Because at the first level, of course, it's so useful for designers to understand the AI tools they're utilizing. But if there's no organizational sophistication around sharing information and actually building with that, then you're going to end up with designers who, in greenfield terms, can build something cool, but it doesn't necessarily have a relationship with what engineers will actually have to implement and the sophisticated structures an organization might have. And PMs in that case are kind of still in the same position they've been in before. As you go up the levels, you begin to see a much more concerted structure and effort around building standards for information sharing, and also around what is going to be consistent about the AI experiences that you build. So, for example, logic about when you're going to have content warnings, or the nature of the warnings. This is a difficult thing, because we don't know in some cases what risks products are going to open up. There's that great quote about how the invention of the ship also invented the shipwreck.
So organizations need consistency about how they want to address the risks they're going to present with their product. You know, that's as much a brand element as a product element. And then finally, I think when you're in the top tier, which is challenging to achieve with the way the tools are changing right now, there's a really harmonious method of communication across the org, where you also have a level of prototyping that is not just greenfield, but incorporates the product you're using and the data you have about your users. Because, again, being fully explorative and enabled in that way is not the most optimal state; being able to build rapidly is. And there's something I actually wish I could add to the book now, which has come up recently, which is the rate at which we decide to make changes. Because once engineering becomes much cheaper, we have to start talking about experience consistency and how much change users will be able to withstand, because there's already a little bit of blood in the water for designers in terms of work shifting from being strategic to more like A/B testing a marginal preference, a little bit more of a navigation thing. And working out when that actually provides a concrete benefit, versus just being a small change, is going to be something that designers and PMs need to have really close communication about, because that, again, is a bit of a brand thing. I often find very different experiences when I open certain apps. And I know that they're optimizing for something I can't necessarily discern, but it does give you a strong feeling that you are a little bit distanced as a user from getting a beneficial experience, versus just a theoretically optimized experience from some remote data point.
And I can't decide whether or not I think this is a really consequential issue, but it certainly is showing up around tech debt and internal tooling, where building a lot of internal tools is fantastic. So many organizations are going in this direction, where it's almost just anti-SaaS: we can build something ourselves now. But some of that also can become a bit of a labor to maintain, if you end up having a product that's pivoting frequently and is predicated upon sort of unsophisticated internal tooling. Yeah, I think that's something organizations are really going to have to muddle through in the next few years as people start to pull back from SaaS. Dan (48:36) Yeah, there's the developer saying that it's easy to build things; it's maintaining things that's the real problem. And I think that is definitely true as you start to have all these internal tools that are really targeted at very specific things. And it's like, wait, that thing doesn't exist anymore, at least not like that, so the output is completely wrong. Louise Macfadyen (48:42) Yeah, 100%. Dan (48:58) I wonder, in your organizational AI maturity model, if it's easier to be more mature the smaller you are. Like, are small places just able to reach maturity faster because they don't have as much bureaucracy and legacy process to work through? Louise Macfadyen (49:19) I think naturally smaller organizations that have a greater ability to pivot do better on this occasion, but they also usually have a smaller user base. And so it means that they are a little hungrier, a little more at ease with shifting with the changing winds, because by their very nature they need to do that. And large organizations, you know, they have enormous headcount. We're seeing a lot of layoffs as a consequence, which is a real shame, because it's been such a fruitful and supportive industry.
But at the same time, I will say, as adaptation is beginning, there is so much opportunity that a large organization with a huge amount of staff can have in terms of just research, which is, I think, one of the best things we can lean into as AI designers: what is available, what do users need, what is the field requiring at this point? And small companies, bless them, usually can't afford that quality of research. And so we're going to see probably even more shops open for independent research, which is great. But yeah, I have to say, I go with the literature on this one. Big companies are really, really struggling, I think, particularly because at big companies there's not just a status quo; the way information has been shared is very historic, and it comes down to change management. And large companies just have so much less capability to pivot. And work is emotional. I think people also feel threatened by the nature of the work itself changing, or their work, or the area they've controlled for so long being eaten by something else. I've witnessed it myself, and I have a lot of sympathy for it, but there's resistance to that change, because these organizations have been structured the same way for so long. Dan (50:59) I think everyone right now is feeling insecure about their job, if you're working in one of these kinds of knowledge worker fields where all of a sudden AI is like, well, I can do a lot of parts of your job pretty, pretty well, you know, and pretty, pretty fast. And I think it's causing anxiety across the board. Louise Macfadyen (51:24) Yeah, but so much of the design job is interpersonal. I think it's been interesting, because it's one of the slowest areas of the triad to adapt. Like, I encourage designers to start looking into evals, and there are so few resources to get designers into evals, but it's very logical for them to be in that space. But designers are a bit of the social layer of work.
And I don't say that just because we can potentially be more gregarious by nature. I also mean that, again, like I said earlier, work's emotional, and design's very emotional: how your users experience things, how your leadership decides they want something to exist in the world, brand, everything else. We have a connection to the visual in a way that other disciplines don't have as much, partially because it's a subjective discipline. You know, engineering either works or it doesn't; design is much more difficult for us to evaluate. And so, for that reason, I see design as weathering the AI storm to a certain degree, because we're kind of curating many different parts of the experience, be it the design itself, or how that's presented to a stakeholder, or how we zhuzh up the motion of a product. You know, there are so many different surfaces that still rely on a lot of interpersonal communication that AI is not going to be able to simplify for us so easily. We're kind of the ambassadors for the user experience in a way that I don't think AI can be. AI is very good at being very flattering, like, the sycophancy of it is crazy, but God knows, layering in ten different stakeholders, layering in the engineering requirements and your latency or whatever you're dealing with, and the user experience and all of that, there are many jobs in there. Sorry, design is a very difficult job. Dan (52:56) Right. Louise Macfadyen, thanks for coming on the AI and Design podcast. The book is Designing AI Interfaces. When is it out? Louise Macfadyen (53:08) It will be out on April 21st, 2026. Dan (53:11) Wow, very soon, very soon. So pick it up wherever you buy fine tomes. And thanks for being here. Louise Macfadyen (53:17) Thank you so much for having me on. This was lovely.