Dan (00:05) Welcome to AI and Design, where we explore how artificial intelligence is reshaping the world of design. I'm Dan Saffer. Nik (00:14) And I'm Nik Martelaro. We're both faculty here at Carnegie Mellon's Human-Computer Interaction Institute. Each week, we break down the latest AI developments, deep dive into topics that matter to designers, and talk with fascinating guests who are right at the intersection of these fields. Dan (00:27) Whether you're a designer working with AI or an AI practitioner interested in design, we are glad that you're here. Nik (00:35) On today's episode, we'll be discussing Impeccable, a new set of skills for AI agents that lets you ask for better design without knowing technical design language. Dan (00:43) And a new paper on AI as entertainment, which argues that the real value of AI might be in entertainment and cultural production instead of productivity and cognitive work tasks. Nik (00:57) We'll wrap up by talking about Dan's article on the Four Horsemen of the AI Apocalypse, which categorizes four possible futures for AI in 2026. Dan (01:07) Okay, first let's get into this week's top AI and design topic, starting with Impeccable. Nik, what is Impeccable? Nik (01:17) So Impeccable is a set of agent skills that aim to help you give commands to your AI coding agent to then get better design. So let me break that down a little bit. If you're using a coding agent like Claude Code or Cursor or OpenAI Codex, you can do things like vibe coding. That's how I'm working a lot today, and a lot of people are trying this out, where you're asking a coding agent to produce something for you: hey, make me a site, make me a card interface with a little bit of description text. The thing about it, though, is that a lot of the code agents seem to produce what people are kind of referring to as this AI style. Dan (01:54) Is this the purple on purple look? Nik (01:57) Yes, this is the purple on purple.
This is AI purple, AI blue, lots of rounded corners. And it's interesting. It has kind of a vibe to it. It's got a style to it. And for some reason, all the code agents seem to produce similar stuff. It's probably based on the fact that that's what they've trained on, so that's just what they're producing. Dan (02:18) Is it just because they're trained on Dribbble or something like that? Nik (02:22) Yeah, well, a lot of it is that they're trained on the same type of things. These models have likely scraped a variety of different web design information, a lot of code, anything that's on sites where people are sharing code to produce certain design elements. And so, yeah, I think that there's just something where, within the models, they've kind of learned this. It's interesting that they all have a similar feel. There's most likely also a little bit of prompt engineering going on in the background at the companies that are producing this, so Claude or Google or OpenAI, right? They've had people who are writing prompts in the background that say, if you're asked to do front-end interface design, for example, try to do these things. Actually, I remember on Anthropic's, there's one where it will say, use fancy features. When you do it in Claude and you're making a website, it'll actually try to do all these fun things. Dan (03:11) But the outcome is that it is just making basically this bland AI house style. Or maybe it's not even bland; maybe it's too fancy, because there are also problems with things like it not being very accessible, the purple on purple look, for example. Nik (03:28) Yeah, I like what you're saying, the house style. And so yeah, that's what you kind of get, and people are taking notice of it. So how do you get around that? One way to do it is that you have a design system, you know, you basically build up a design system yourself. You put a lot of work into that.
You provide that design system as code, right? The components, the CSS file, to your agent. And then you say, just work with this stuff. That works totally fine. I actually do this all the time, like I'll download a design system and configure it myself and then use that, and everything works well. I get the components I want. But if you're just truly going off vibes, like you're starting with something and you're not really working with a system like that, you're not, say, in a company where you're beholden to a system, you get that house style, that AI house style. So this set of tools, these skills, is basically a set of files that you give to the agent, and then you say, you know, colorize this or add delight. And it's literally a command called delight. All you have to do is type slash delight and then hit enter, and the code agent goes to work doing this. Dan (04:30) And is the result something that is just a different house style, or is it really working with what you already have? Nik (04:38) That's a really good question. I mean, I think that it tries to work with what you have. If you look at their site and you check out their nice before and after, it seems to be doing a decent amount of work to create something that looks different. It's applying different typography, it's applying different styling, spacing, content layout, colors. And so it can drastically change stuff, but I think it tries to work a little bit depending on what you're doing. You can actually teach it; it has a teach command where you can say, look at what I have and kind of understand this. So what it's doing is having the code agent go through the files that you have and try to understand what's there already and then work within that, especially if you do have something like a design system. But yeah, so I tried it. I actually tried it on our podcast website, and I tried to colorize it just now.
We have a really monochromatic site. Dan (05:06) Hmm. Nik (05:26) It added colors, and it tries to use designerly language, like, we're gonna use warm tones. We're going to try to make it punchy, but still professional, because it kind of understood, well, this is a podcast, so this needs to be informational. Dan (05:32) Mm-hmm. Mm-hmm. Nik (05:44) It's interesting. And I'm actually curious about your thoughts on this, because in a way, to me, it feels like this thing of, I could just import design. In the same way that I import libraries into code, this is the idea of just importing good design style. And what does that maybe mean for people? And then what does that also mean for all of the work that we've done as designers up until now? Dan (06:08) Well, I checked out the site too, and we should note that neither Nik nor I has any affiliation with Impeccable; this is not a paid endorsement of Impeccable in any way. What is interesting about it is that it does have these baked-in UX patterns, and you can add to this pattern library. And so it will be interesting to see if something like Impeccable gets added everywhere, or is this going to become kind of a central repository for all these really great UX patterns? I'll be interested to see what happens. I'm surprised, actually, that Figma doesn't have something like this, and maybe it's because they're still playing catch-up with Figma Make. Maybe they will eventually build something like this into their tools, which would be really interesting. Or maybe they will just use Impeccable or something similar. Nik (07:03) I was going to say, on the thing about Figma, I wonder if they probably do have something like this in the background that maybe just isn't exposed to the user. The cool thing about Impeccable is that it's basically open source.
Like, you download the files, and the commands and the skills, those are the things that the agent uses. These are markdown files. So these are just human-readable text, and you can go in and edit these things. Actually, one of the things that I think is kind of interesting is, like you said, there are all Dan (07:26) Mmm. Nik (07:32) these, you know, what we might argue are pretty good principles of web and interface interaction design here. And you can almost learn from these. If you don't even know a lot about design, and I can totally imagine this being really useful for indie developers who don't have a design background or who are just trying to get into this, trying to build their own applications, you can start to learn about all these interesting patterns from reading these. Dan (07:59) Yeah, it would be cool if they did some kind of collaboration with something like the Interaction Design Foundation or something like that, where these patterns are already kind of out there. Could they import some of them in, or could there be some kind of team that's working to bring those into the tool? Now, my question is, why would you use something like Impeccable instead of just prompting? Is it just that it does have these things baked in, so I don't have to type everything, like, remember to make things accessible, or those kinds of things? Nik (08:39) Yeah, I mean, the idea here with the skills is that you give the AI agents sort of these new skills, these new capabilities. And yeah, you basically can just type in, you know, slash colorize, slash animate. So for example, if I want to add a bunch of microinteractions to my website, which is actually something that's not trivial to do, and of course, as you know, to do well. Dan (09:01) And by the way, thank you for saying microinteractions. I get 50 cents every time someone says that word.
Nik (09:09) But so, I can now just do slash animate. And if you look in the file, it basically has all of these things about how to think about animation. There's a whole section in here on microinteractions. There's a whole section on state transitions, navigation and flow. So what it's doing, basically, is, yeah, in a way, these are all prompts, but you don't have to think about it as much. You don't have to know this stuff. So if you didn't know this, right, you wouldn't even know to add this to your prompt. And secondly, there's a lot in here. These are not short files by any means. So for example, the animate file is 180 lines of skills and commands and principles. Dan (09:51) Wow. Nik (09:52) There's a whole one on critique. There's a whole one on delight. You know, delight is 307 lines. I think there's a decent amount of work that's gone in here to codify all these skills, and they are, exactly like you're saying, the kinds of things you could write into a prompt, but now you just don't have to think about it. And by making it a skill, it becomes something that is replicable and easy to call up. So the other thing is, as you go in, let's say you start developing your own additions to this. So for example, Dan, if you wanted to go in and go, actually, I'm going to write the micro- Dan (10:08) Mm-hmm. Nik (10:24) -interactions section, I'm going to really expand this out. This now allows you to save that, and now you're not copying and pasting it every time. You don't have to remember to use it every time. This is just something that's now available and accessible to your coding agent. Dan (10:37) This is like an art director's dream, where you get to be like, add more delight to this. Make this color more vivid. Slash delight. I could see art directors walking around: slash delight this.
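Since the skills Nik describes are just markdown files the agent reads when you type the matching slash command, a hypothetical skill file in this spirit might look something like the sketch below. This is not Impeccable's actual format; the headings and rules are invented for illustration.

```markdown
# /animate — add motion to an interface

## Principles
- Animate state changes, not decoration; keep most durations between 150 and 300 ms.
- Honor the user's reduced-motion preference before adding any movement.

## Microinteractions
- Give buttons pressed and hover states; acknowledge every user action with a small response.

## State transitions
- Cross-fade between views rather than cutting; avoid layout jumps while content loads.
```

Because the command is just this text, editing the file changes what the command does, which is what makes the skills inspectable and customizable.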
Nik (10:50) That is, in many ways, something we might see. And on the one hand, right, does this replace design? If I can just call this set of open source, freely available things for my code agent, now I don't need a design degree, I don't need design skill, I just slash delight, slash colorize. And so these things, like, I don't know. What does that mean? Dan (11:13) You gotta know what you're missing, though, right? You have to be like, something is off. And if you just start randomly putting commands in... And even once you say slash delight and it delightifies it, you're still going to be like, that may be a little too much. Or does it execute that in a way that you would want it to? I still think some of that design judgment needs to happen. Now, does that mean that other people can't learn that? Of course not. I mean, some of the best designers I've ever worked with don't have design backgrounds and kind of eased their way into it and just learned how to do it. And I think this will just be another tool to help people learn how to do it. Nik (12:00) Yeah, two other thoughts that you've kind of made me think of here. One of them is that what you're going to get here now is potentially this new AI style, which is at least grounded in good principles, but it might sort of lead to a new style. It's kind of like good templates. We've had good template systems for a long time, and depending on what you're trying to do, if you don't know much about design, you can go get, you know, WordPress templates, for example, right? That's great. You could always get great, actually professionally designed WordPress templates. And if your goal was, I'm creating a site to do this thing, I'm not a designer, and I don't really have a huge budget, you could get that. And then great.
You move forward with what you're actually trying to do. Dan (12:22) Mm-hmm. Right, or Wix or Squarespace or any of those kinds of ones where you can just kind of grab off-the-shelf stuff. Nik (12:50) Right, and so the question here is, is this now sort of the evolution of that, right? Now, what's interesting is the fact that you can go in and start to steer and change it, like what you're suggesting with the art director. Actually, the art director, who may have taste, can start to say, well, no, not like that. Actually, what if we did this? And then you can update. You can actually update and change those files and kind of embed your own style and taste in it. That's the thing I think is really interesting here, as opposed to a template. A template, you have to judge and say, I like this or I don't like this, I'll use this or I won't. If you have some skills, you can go and modify the template, but this might allow you to do it in a way which doesn't require maybe as much, say, technical skill for modifying something. Dan (13:16) Mm-hmm. Right. Can you use another AI to help you write the instructions that go into Impeccable? Nik (13:38) Yeah, that's an interesting idea. You might be able to do something where you say, hey, run some Impeccable commands, you know, slash delight. Okay, I like this, I don't like this. Okay, another AI or your code agent says, okay, listen to what I'm saying. Now try again. Now try again. Okay, ooh, I like this option. Now write a prompt to update the delight command, because I like this. And so now feed that back in, and let's update the delight to be my kind of delight. Yeah, that's totally possible. Dan (13:58) Mm-hmm. Right. And my delight is sparkles everywhere. Nik (14:11) So maybe as a last point on this, one thing that I also thought of as we were talking was for those who maybe don't have as much design background.
And if you haven't totally developed your taste and your curation skills yet, I do wonder if these kinds of things will be making a claim of, this is what good design is, right? These are commands. And so if I say slash animate, that must be what good animation is. Dan (14:37) Mm-hmm. Nik (14:37) Or slash simplify, okay, this is what simple design is. And I wonder if people will just say, yeah, that's what it is. Dan (14:44) It's the old Marshall McLuhan thing, right? We shape our tools, and thereafter our tools shape us. Now, I will say, is this any different from designers looking at Dribbble and being like, well, this is what good visual design is? You can learn bad habits anywhere. So I think what's nice about this is that it is open source, and one could imagine competing notions of what simplicity is. You could have the Dieter Rams module that you plug in, or the Jony Ive one, or whatever it is, which I think is pretty fascinating, pretty interesting. Nik (15:16) Yeah, I like that. I mean, I like that, basically. You know, the fact that you could put this up and you could basically say, okay, I'm going to create slash Ive, slash Rams, right? You can basically create these on your own. I like that. I agree with that. I really think that this idea that these are open can also help us as a field, right? To develop, to potentially bring in a conversation, like, well, what do we think that these commands ought to be, or what is my version of that command? Because the same thing could be done in any of the tools. You can actually ask your AI. I could ask Cursor, or I could ask Figma Make, please simplify my design. And it's then going to work, but I actually don't have access to how it is trying to steer that simplification. At least with a skill like Impeccable, you can kind of inspect it. Now, at the end of the day, right,
The AI agents are still somewhat black boxes, but here you at least have a little bit more of an ability to steer. And that's actually something I really like about this. So I'm excited to see where these things go. I'm excited to see if people make competitors. I'm excited to see if people start using Impeccable and how they do it. I think it's a cool new development in how to embed more design into the vibe coding tools that we use. Dan (16:30) Okay, so let's talk about our next topic, which is AI as entertainment. This actually came out of a research paper by Cody Comers and Ari Holtzman: Cody Comers at the Alan Turing Institute and Ari Holtzman at the University of Chicago. The paper argues that while generative AI right now is being marketed and evaluated all around its level of intelligence, its efficiency, and workplace productivity, its most significant impact is likely to be as a medium for entertainment. And I was like, if this is true, then should we be designing AI differently? What do you think, Nik? Nik (17:23) Yeah, this is interesting. You know, when I looked at this, they've got a cool diagram in the paper, which basically lays out a two-by-two, something designers love, where one axis is harm and benefit, and the other axis is intelligent or instrumental performance versus cultural and contextual performance. And they sort of map out on this two-by-two the fact that a lot of current ways that we're doing AI stuff, around multimodal AI, cognitive science tasks, Humanity's Last Exam, automated research, these are in the benefit half. We believe those are going to be a benefit, but they're mostly on the intelligent or instrumental side of things. And then they also list out a variety of harms.
And so they talk about, for example, how we do things like set up safety evals, or we have to think about content moderation, and then putting guardrails on things. One of the things they talk about is this idea that most of the harms we talk about are cultural harms. But they make an argument and basically put a big question mark in the box in the upper corner of the two-by-two, saying, what about cultural benefit? And I thought that was interesting. Dan (18:20) Mmm. Nik (18:33) I thought that was interesting, this idea that we could think about, okay, how might we design things to benefit us culturally, benefit us in terms of entertainment value? And yeah, in regards to how we design for that, I mean, I think you have to, one, start moving away from only thinking about the fact that it can do cultural harm. You have to actually shift your mindset. That's the first thing: how can we benefit? From there, I guess one of the things that we might need to start doing as designers is looking back to say, okay, well, what are the benefits of entertainment? What are the benefits of cultural production? And how do we recreate that with some of these AI systems? Dan (19:07) I know right now, I mean, there's a ton of people, particularly young people, who use AI as this kind of companion, who are using AI as entertainment. They talk to it, they chat with it, they have relationships with it. Could we better design these AIs so that that's a benefit? Nik (19:27) I wonder if this is where you kind of get into the areas of personality design, which has actually been something that we've been looking at in regards to... Dan (19:32) Hmm? Nik (19:35) AI agents or chatbots. I mean, before the modern conception or the modern tools of AI with LLMs, we were designing chatbots with other tools, and we were thinking about how we design their personalities.
And, I mean, this is where, right, having folks who are really Dan (19:43) Yeah. Nik (19:52) aware of and skilled in these kinds of narrative crafting and personality matters, right? This is where you're getting, for example, writers to come in and do stuff. I guess, yeah, to your question, sorry, I guess maybe I'm still stumbling a little bit on this. Do we need to design the AI differently? The foundational AI, like, we're gonna use language models; I don't know if that's gonna change a whole lot. But again, the steering. Sorry, now that I'm making this connection to the skills, Dan (20:18) You Nik (20:20) maybe it's this thing of, you have to basically design a bunch of skills. Just like we were talking about with the design skills in Impeccable, now you have to have a bunch of personality skills, or a bunch of basically social skills or entertainment skills. Now it's sort of like the agents can have access to these different skills, but these are more on the side of cultural production, or on the side of entertainment-type skills. I will say, I don't entirely know what all those would be. Dan (20:43) Well, the authors point to this whole idea of designing for meaning. And I'm guessing that that's kind of tied into that, where it is all about, can you make conversations with a chatbot, or with some kind of avatar, even if it is just storytelling, can you make those more meaningful in some way? And is that, like, how do we change the outputs from AI to emphasize meaning? Or is there some layer where we're looking for the deeper meaning of a chat that is happening? Those kinds of things. They also talk a lot about putting friction and ambiguity into the mix. So rather than it being like, well, this is 100% the answer of what you're looking for.
You know, can you have something that's quirky, or isn't always agreeing with you, or a chatbot that pushes back on your ideas? Of course, then you run into the problem of, with a chatbot that makes things harder for people or makes things more challenging for people, would people use it for very long, or would it be like, oh God, I don't want to fight with this thing all the time? So I think there's a really interesting fine line to walk there that could be really cool to explore. Nik (22:08) I can imagine that there are at least some people who wouldn't mind having some kind of agent or an AI do something like that, if they're explicitly saying, I want this to challenge me, or I want this to wrestle with ideas with me, or prompt me to be more reflective, for example. You know, I think sometimes people are like, no, I just want this to give me the answer. But actually, maybe that's the difference between designing for cognitive tasks and designing for entertainment tasks. It isn't fun to be told the answer if the point of an entertaining thing is to find the answer. So like in a game, right? If an AI assistant helped you with a game and said, do this, do this, it might be good and intelligent at playing the game, but it's not fun for you. And so... Dan (22:39) Mm-hmm. No, I mean, the whole point of games is to make something that should be easy hard, right? That's where the fun is. Even as a human being, if you have a teammate that's agreeing with you all the time, they're not adding much meaning, they're not adding much value. Having a creative partner that is an AI that is pushing back, or trying to steer you in a certain way, or just forcing you to have this friction of defending your idea or really thinking through your idea... I can see a lot of value in that.
Nik (23:19) Yeah, I mean, we're exploring some of those ideas in my group and some of the research: the idea of this sort of partner that works with you, challenges you, prompts you to be more reflective, prompts you to just think through things. And that, we're finding, can be helpful if people engage with it. So to your question, right, this is where I think the experience design of that interaction becomes important. And so, in regards to what designers can do, it's how do you think about designing for that level of friction, designing for that level of conversation, such that people do want to engage with it, that they are finding value in it, that they're able to make meaning through their interaction and through their experience with an AI. Dan (24:00) I think it's an interesting paper, and I think it's an interesting way to reframe how we think about AI: to think about it differently as entertainment, as a meaning machine versus a productivity machine. Nik (24:23) I think that's what's kind of cool about this paper, right? It opens up the conversation about that, and it gets this idea in people's heads of thinking about what kind of positive benefit there could be. If we think about AI creating entertainment, how can you have that positive benefit, as opposed to just thinking about how this could potentially harm people? So it opens up, I think, the design space for folks, and it opens up people's thinking about this, which I found interesting. Dan (24:51) And speaking of interesting and provocative, we should talk about my article, The Four Horsemen of the AI Apocalypse. And just as a preamble for this: I wrote this over the break, going into 2026, thinking about what are the different ways that AI is starting to warp and find different places to live? And what are the different tactics that companies that are working with AI are all trying? And so I came up with these four big buckets.
Now, there's some overlap between them, but I think it's an interesting way to think about the four different possible futures that are gonna happen. And as I say in the article, I think all these futures are going to happen all at once, all the time. And I think going forward, all these companies are going to be trying all of these things and seeing what works and what doesn't. So the four horsemen: the first one is all about AI as glue, and the idea here is that this is where AI disappears into your existing tools. You're not opening up an AI app; all of your apps are just getting smarter. And this is kind of the promise that Apple Intelligence made last year, and they're releasing all these Foundation Models frameworks, which give developers access to on-device AI. So you get free inference, offline capability, privacy. AI isn't something that you go to; it just makes your apps better. The first horseman is all about these AI features that are the ones you forget are there. They are just software working in the background. Nik (26:52) Yeah, I think this is a really interesting idea, because it speaks to the idea that good design is invisible, right? Good design falls away. And this is the kind of thing where I can imagine there are going to be a lot of different products that try to approach this. One thing about your article that I like is the fact that you point out on-device models. Because one of the interesting things about adding these types of features to software is, if you are using cloud-based AI providers, every time that feature runs, it costs money, and it costs money that someone has to pay, whether it's the user or the company that is producing the product. And so that all now needs to be accounted for. You have to basically have traceability on it. So every time it gets run, who owes whom? How much are we using?
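Nik's per-call cost point can be made concrete with a back-of-envelope sketch. Every number below is invented for illustration, not real provider pricing:

```python
# Back-of-envelope cost of a cloud-backed AI feature (all numbers illustrative).
def monthly_cloud_cost(calls_per_user_per_day, users, tokens_per_call,
                       price_per_million_tokens):
    """Rough monthly spend for a feature that hits a cloud model on every use."""
    monthly_calls = calls_per_user_per_day * users * 30
    monthly_tokens = monthly_calls * tokens_per_call
    return monthly_tokens * price_per_million_tokens / 1_000_000

# e.g. 20 uses per user per day, 10,000 users, 1,000 tokens per call,
# at a hypothetical $1 per million tokens:
cost = monthly_cloud_cost(20, 10_000, 1_000, 1.00)
print(f"${cost:,.0f}/month")  # an on-device model makes this line item $0
```

With these made-up inputs, a single small feature is a five-figure annual line item, which is exactly the accounting burden that disappears when inference runs on the user's device.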
And the fact that you bring in this idea that we might want to start thinking about on-device AI is, I think, important. Because when it runs on your device, arguably it costs something in regards to the energy on your device, but it's much more like features in software as we have known it. You're not thinking about a cloud call. You're not thinking about this stuff. It can just run as much as you want, it doesn't matter, and it just benefits the user. Dan (28:01) The other problem with this is latency. My Alexa, because I've now updated to the new Alexa Pro, has a huge latency problem. Every request, because it's going to the cloud, takes two or three seconds at a minimum to come back. So you don't get that instant response where you're saying, hey, set a timer for three minutes, and it's like three or four seconds later it says, okay, I've set a timer. This lag, this non-responsiveness, I think, is also a great reason to push more things onto devices, for sure. The second of the four horsemen, the second horseman, is all about specialized features, and this one is AI that knows your work. So instead of having one AI that's moderately good at everything, you get AIs that are exceptional at very specific things. Harvey is an example of this. It's building a legal AI that understands things like citation formats and contract pitfalls. It's not a chatbot with legal prompts; it's a system trained on how lawyers actually work. This is building for the edge cases and making sure that things are buttoned up. You can imagine this in fields like medicine, which would be another one. A medical AI that misses an edge case is going to be worse than no AI at all.
And this is going to be really important in spaces where you can't have expensive mistakes. Nik (29:39) Yeah, when I was thinking about this one, as you wrote about it, I was thinking, man, this is going to be great for our friends who do user research, because this is where a lot of their expertise lies, right? It's companies who have teams who can go out and understand their users. They understand what their users work on; they have domain knowledge, right? And what they need to do is figure out how to build AI systems for those specific domains. They're not actually going to be competing with a general-purpose ChatGPT or a general-purpose Google Gemini. They might build on those, actually, most likely they are building on those via API access, but there's so much more that goes into making that. And so, yeah, as I was reading this, I was like, yeah, this is where, for example, folks who are doing user research out there and have deep domain knowledge are likely going to be able to really show their value and really be able to create much more valuable AI systems than just, okay, let's throw it at a general-purpose AI. Dan (30:34) The third horseman is, I think, probably the most ambitious, and this is AI as operating system. So this is almost the opposite of the other two: it doesn't enhance your apps, it doesn't specialize in a field, it is the operating system. It's one place to go, one intelligence to talk to. Everything flows through the conversation. It's not an app, it's not part of an app; it builds the app on the fly or calls a third-party app whenever it's necessary. And I think this is also when we're starting to talk about AI-first hardware. These are devices where AI isn't a feature, it's the foundation. So the app paradigm really dissolves, and you can describe what you want and the AI figures out how to deliver it.
Nik (31:29) This one to me feels very much like, if you are trying to imagine what the phone is gonna be in the next 20 years, this is the vision that fits that a little more, right? I don't go to my apps, I go to my AI, and then my AI figures out stuff for me. I just interact with my AI. That AI might interact with a bunch of other AIs and a bunch of other applications that exist out there, but I don't have to think about it. I don't have to download stuff. You know, I think this is a further-out-there vision. I guess a question that I have, yeah. Dan (32:02) I think it's further out, but I do think, like, if you look at the integrations that OpenAI was trying to do, where it integrates things like Notion and a whole bunch of other things, where it could call those or bring information in from them, they're definitely aiming for this. Nik (32:20) Yeah, that's a good point. I think you're right. I think they are aiming for this: adding other applications, being able to interact with them through different means, and then just doing the tasks that you want to do with them. I think that's a really good point. I think they are probably aiming at this. The question is how far they are going to be able to go, or where it breaks down. Dan (32:40) I think it's gonna break down a lot. I think this is gonna be harder than it seems. It becomes a question of intent: knowing how much your phone can do and trying to guess what you need and which app you're trying to get to. Which of the three travel apps should I give you? How do I decide that? Am I just building a travel app on the fly? I think all of those are really hard questions. Nik (33:04) And the other thing there is, kind of as we talked about last week, right? If you are a provider of, say, a travel service, someone's still got to book tickets or whatever. Who does it pick for you? Who does it bring up for you?
So that whole ecosystem of who gets ad-supported or who gets promoted, I think there's a lot of work that needs to be done there. Dan (33:18) Right. The last one, I think, for me is the most terrifying and somewhat the most interesting, and that's AI as filter. It's AI as a universal intermediary, where every single digital interaction is filtered through an AI. Every piece of information that you get is filtered, is curated by an agent before it ever reaches you. And in this future, you don't browse the web. There's an AI browser that browses for you and reports back. You're not scrolling feeds; an AI is going to decide what's worth your attention. You're not going to comparison shop; an AI is going to buy something for you. And the raw internet, that firehose of information and content, becomes something that most people will never even see. I find this interesting from a theoretical standpoint, but boy do I find it scary from an ethical standpoint: filter bubbles increasing, no common ground being created. I just find this future a little scary. Nik (34:43) Are we like halfway there already? Like, I feel like so many things, yeah. I mean, I guess maybe one could argue it's a little bit scary now. One of the things this made me think about was who then gets to control the filter, the AI filter. Dan (34:45) Yes, we are, for sure! Nik (34:59) It could be, and we've kind of suggested this with some of the last things we've talked about, this idea of, well, it's going to be a large company, right? You're going to use Google Gemini. You're going to use Anthropic's Claude, and Claude is going to be your filter. And you could say that, and maybe you're like, I like these companies, I like what they do. And we do that with all kinds of apps, for example, right? TikTok's algorithmic recommendations are good, right? It'll find you great content that you will like, and a lot of people really like that.
And so is that a bad thing? It's not a bad thing, but... you know, how do you steer? That's maybe one of the things in this area of control. Now, one of the interesting ideas that some people have proposed is that you could have your own personalized AI, your own AI that somehow understands you and represents you. And that kind of speaks a little bit to the AI that understands the way you work. But what if every single person could have that? A question there is, how does that get designed? If you say everyone can have their own AI that's customized to them, that they've personalized somehow, you now have to have a system that either has to be designed to understand the individual and somehow present itself and be controllable by the individual, maybe, maybe not, or you need a way for people to design their own AI, or at least their own AI's tuning. Which, I don't know. Actually, now that I think about it, I don't really know if I have a good example of a modern system for doing that. I don't even really have a good way of tuning my ads. I know I can go in, and I do this, for example, for Google and YouTube, to say, you know, don't show me these kinds of ads, I'm okay seeing those kinds of ads. But I actually really wish I could just say, hey, no scary movies, please, no scary movies. Like, I do not want scary movie trailers. I just can't do it. But you know, these machinist videos, machine tools and new tools and stuff, I'm fine with that. Show me all the new tools that you have. Dan (36:50) Just don't show me the Saw movie. Nik (36:53) Yeah, right. Dan (36:55) Well, you're right. I mean, there's no central repository for all your settings, right? You have to go in and tweak all of those everywhere. And this kind of would solve some of that problem, because it would eventually know, hey, don't show any scary movies, but show me power tools. You can see a lot of value here.
And it's interesting because Nobel Prize winner and Carnegie Mellon professor Herb Simon, this was his vision of AI: that AI would help focus your attention on what's important and get rid of all the distractions that aren't. But Herb lived in another time, when the internet wasn't controlled by five or six large companies. Who's going to provide this? Who's going to keep your data? Who's allowed to show you things? Who pays for the content? Why would places like Amazon agree to this rather than having you come to Amazon directly? You know, there's just a million questions about this. But I do think it's interesting, and so I put it on the list. Nik (38:03) Yeah, I mean, the thing I'm really interested in now, even as we're talking about it, is how do I design that AI agent system for myself? Like, how do I design it for individuals to steer and control? Whether it's got to be a self-learning system that can somehow update and that you can have conversations with, or whether it actually has really explicit controls, I'm not sure what it is, but now I'm curious. I do wonder, as we talk about agent systems, like coding agents, whether what we learn in the coding space of designing agents to do stuff, if we start to basically learn some design patterns and some strategies, interfaces, tooling, could someday translate all the way down to the everyday consumer, to say, okay, well, I have a way to configure and set up my AI that then filters the rest of the world for me. Dan (38:52) Yeah, it's interesting. I don't know if I want to live in this world. I want to live in some of this world, but I don't know if you can have just some of this world without having all of this world. Nik (39:04) As I'm thinking about this idea of, do you or do you not want to live in this world? I mean, one of the things here is that I think us as designers, right.
And arguably you and I, kind of working at this intersection of AI, like this is a place where we have some agency and where we can actually steer the larger systems. There's a lot going on here, and I'm not saying we can turn the whole ship, but actually, by thinking about these things and potentially developing tools, systems, and ways for designers to think about this stuff, this is a way for us to actually start moving the needle, potentially, toward a world that hopefully we all want to be part of. Dan (39:38) And as I like to often say, design isn't only about problem solving, it's about creating a more humane future. It's about designing and creating a future that you want to live in. So hopefully, by having these kinds of conversations, we can start to get there. Nik (39:56) All right, well, that's our show for the week. Next week, we're gonna bring you a couple of new articles that Dan and I read throughout the week to try to think about what they mean for design. We're going to speak next time about software that's too cheap to meter: what happens when you can just build as much software as you want because it doesn't cost anything to actually create? What does that mean for design? Thanks for joining us, and we'll see you next time.