Dan (00:05) Well, all right. Welcome to AI and Design, where we explore how artificial intelligence is reshaping the world of design. I'm Dan Saffer.

Nik (00:14) And I'm Nik Martelaro, and we're faculty at Carnegie Mellon's Human-Computer Interaction Institute. Each week, we break down the latest AI developments, dive deep into topics that matter to designers, and talk with fascinating guests who are right at the intersection of these fields.

Dan (00:27) Whether you're a designer working in AI or an AI practitioner interested in design, we're glad you're here. Today we have a very special Hot Takes episode, because there were simply so many stories we couldn't decide.

Nik (00:42) We have stories about AI brain fry, training design judgment, an open source alternative to Figma, the future of AI images, and much, much more: 14 stories in all, with our hot takes.

Dan (00:53) How it will work is one of us will give a quick summary of each story, and then each of us will give our brief hot take on that story. Then we'll move to the next story. It'll be like AI speed dating.

Nik (01:05) So let's kick it off with another AI fatigue article, this one from the Harvard Business Review. This article talks about "brain fry," in scare quotes, a state of mental exhaustion that occurs when users delegate too much critical thinking to AI. It says that while AI increases speed, the constant context switching and the effort to verify AI outputs can actually degrade a professional's creative problem-solving abilities. Dan, what's your hot take?

Dan (01:33) When I read this story, the first thing I thought of was you, Nik, because of what we talked about last week. You were like, I'm getting up and I feel like I should be feeding the machine more. And so I think this is just one more story that is a hundred percent true. Everyone I talk to who is using AI a lot is definitely feeling this idea of AI brain fry, where you start to feel like you can't do anything without the AI, even things that you were able to do not that long ago. I feel like the AI is starting to encroach upon those things. How about you?

Nik (02:16) Yeah, and I think it also really is causing this sort of mental tiredness. So maybe, just as a thing for our listeners, keep an eye out for it in yourself. Maybe try to avoid it if you can.

Dan (02:27) One of the things they talk about in the article is the idea of having more human-in-the-loop time, basically, where you're getting a lot more involved in what the AI is doing rather than just pushing the button. OK, next story. Shreyas Doshi says that product sense, which is the ability to make correct decisions under uncertainty, is the only skill that really matters in the AI world. What he means by product sense is the intersection of empathy for the user and deep understanding of the business domain. And he claims that while technical skills and data analysis can be taught or automated, the intuitive judgment required to know what to build is going to be the ultimate differentiator for high-level product leaders. Nik, hot take.

Nik (03:19) Yeah, I agree with this. This is what I feel like I'm seeing in my own work, and in a lot of my students' work when they use AI to develop basically new product ideas. You've got to say: build this, this is what I want, this is how it should feel.
You can start to offload that a little bit to the AI, and people have tried, but I just read a ton of reflections from project two from the students in my class. And all of them were saying how, if I just let it go, it will start making recommendations that don't align at all with what I believe is a great product experience, just whatever else is out there. It's basically copying all of the known patterns that we have. And then it's also acting in a way where, if I mention anything, it's just agreeing with me. A lot of the students are trying strategies to kick it out of that mode and get the AI to step back and ask them more probing questions, to make them think a bit harder, and they're finding that that's really helping them. So yeah, I agree. I think the skills that have always made us great designers are what's going to set people apart: having that empathy, being able to think about the implications of a decision I'm making, being able to think strategically, and being able to curate what it is that I'm seeing.

Dan (04:43) Yeah, and I think knowing what to build is going to be really, really important as the cost of doing this stuff goes down. You're just going to see people building everything and anything. And while that may seem like a great idea, I read somewhere that someone said, who knew that the slowness of some companies was actually a feature, not a bug? Because it does take you time to ask, should we actually spend time developing this? Should we really think it through? And I think that judgment, that product sense as Doshi calls it here, is going to be a huge differentiator.

Nik (05:25) Just as a last comment on this, it reminds me of a phrase that I've been using, which is: in a future where we can build anything, we have to design everything. Thus, I would argue, designers need to start getting to work, thinking about what it is that we actually want to see in the world. All right, next up: news that Google Gemini is working on a markup editor for NanoBanana images. This would be a new feature that allows for more granular manual markup on AI-generated images, where you could select things and then mark them up to change them. It highlights this trend of moving away from black-box generation towards collaborative editing, where users can circle or highlight specific areas of an image to request targeted modifications, and it bridges a gap between prompt engineering and more traditional graphic design. Dan, your hot take.

Dan (06:15) Love it. I love this kind of combo of prompting and direct manipulation, because it is so much easier to be like, well, I love all of this, but I don't like this one little piece, and circle it and change that one piece to black or make that bigger or whatever it is. That combo, I think, is so powerful. If I could say what the major interaction design pattern is going to be in the next two years, it's going to be that one: the combination of prompting and direct manipulation tools. I really, really dig it. I'm glad that it's starting to come to NanoBanana and out into the world.

Nik (06:58) Yeah, I'm excited for the experimentation that's happening here. I think there are going to be a lot of really cool design patterns that start emerging for this mixed level of autonomy and generation, prompting, direct manipulation.
Actually, it reminds me, this was something that Andrej Karpathy talked about in his Software 3.0 talk that he gave at Y Combinator a few months ago. He talked about a slider you could almost imagine: how much do you want the AI to do something, how much do I want to manipulate directly, how do I want to do things in the middle? And I think we're going to need to figure out what those interfaces for the middle are. It's going to be really interesting, as people build these, to start watching users use this stuff and see where they want to prompt, where they jump back into direct manipulation, and where none of those are working and you actually need new interface design to pick up the slack.

Dan (07:52) Next up, more hotness: a piece by Mike Chambers called When the Model is the Machine. It discusses the paradigm shift where the AI model itself becomes the primary engine of the software rather than just a feature. The idea here is that AI agents don't just help build the software, they can completely replace the need for it. We see this happening with SaaS stuff right now, because if your sales team uses an agent that can pull data from your CRM, draft an email, update the pipeline, and generate a forecast, the question becomes, well, should we be using Salesforce or some other SaaS tool? Do we need a tool to do this at all? Nik, what do you think?

Nik (08:39) I'm actually a little skeptical of the SaaS-pocalypse idea, but that's me. Part of the reason why is that the intelligence these tools bring, which is something I do think can be replicated or embedded with a generative AI model, is just one part of it. There's still the whole interaction layer. There's the viewing of the data, the management of the data. If you were just imagining this stuff moving into a chatbot model, I don't actually think that's going to work at scale. I think there's a reason that a lot of these products are designed the way they are, and it's because of the way they've been refined over years around how people think and how people work with this information. Now, that being said, I do think there may be something to the generative UI aspect of things, where different team members might be consuming information in different ways and it's generating UI for them, and that could be a big competitor. But I would almost argue that it might be the companies that already build these products that are best positioned to create those new generative UI interfaces for their products. And the other thing is that, for as much as generative AI is really powerful and does a lot of stuff, because it isn't deterministic it can actually be really frustrating and hard to diagnose when something goes wrong, when sometimes what you want is a more deterministic thing. I'll say I've actually been building some small tools for myself to do these types of almost customer-relationship-management tasks. But the funny thing is that what I'm ending up doing is writing deterministic code. I'm having Claude write basically deterministic Python code and then build me an interface that works for me. But I'm not actually having the LLM do anything after it generates the tool for me.

Dan (10:28) Right.
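A minimal sketch of the pattern Nik describes here, assuming a hypothetical contacts.csv with name, last_contacted, and notes columns: an LLM can help write a tool like this once, but the tool itself is plain, deterministic Python with no model call at runtime. Dan picks the thread back up right after the sketch.

```python
# Sketch of an LLM-drafted but deterministic "mini CRM" helper.
# No language model is called when this runs; the file name and
# column names (name, last_contacted, notes) are made up for illustration.
import csv
from datetime import datetime, timedelta


def load_contacts(path: str) -> list[dict]:
    """Read a simple CRM-style CSV with name, last_contacted (YYYY-MM-DD), notes."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))


def overdue_contacts(contacts: list[dict], days: int = 30) -> list[dict]:
    """Deterministically flag anyone not contacted in the last `days` days."""
    cutoff = datetime.now() - timedelta(days=days)
    flagged = [
        c for c in contacts
        if datetime.strptime(c["last_contacted"], "%Y-%m-%d") < cutoff
    ]
    # Oldest first, so the most neglected contacts surface at the top.
    return sorted(flagged, key=lambda c: c["last_contacted"])


if __name__ == "__main__":
    for c in overdue_contacts(load_contacts("contacts.csv")):
        print(f"{c['name']}: last contacted {c['last_contacted']}")
```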
And then it is a separate thing. You're not calling it from the LLM. It's just another piece of software.

Nik (10:38) That's exactly what it is. So basically it helped me think through what it is that I wanted, but then it found some other code that it could use and generate that would process the data for me and give it to me in a way that worked well for how I was thinking about it. But it didn't use any of the LLM to do the intelligence. All the intelligence was actually some old-fashioned AI. In many cases it was some other machine learning model that was trained on something else, just sort of an off-the-shelf model to do something, and it works just fine.

Dan (11:08) Yeah, as you were talking about this, I was thinking, A, I agree that the SaaS apocalypse may be overblown. And B, one of the things these companies have that you wouldn't have building this on your own is a large pool of collective behavioral data about how people are using their product. And that is super valuable information to have when you're doing things like generating UIs or refining your product. That is a really powerful advantage a lot of these incumbents have: having that data across many, many industries in order to improve the product. I think they have a leg up on any company rolling their own to compete with them, which actually brings us to our next hot take.

Nik (12:06) Yes, so the next story is from Sid Ramesh. He asks the question: if the thing you're making can be reproduced by a motivated stranger with a credit card and a Claude subscription, what exactly are you selling? Not the technology and not the features. Those are table stakes now. You're selling user experience and proprietary data silos. It challenges founders to think about what remains valuable when the AI part of a product becomes a commodity. Dan, hot take.

Dan (12:32) Yeah, this is interesting because it's all about, what are your moats? When all the models are basically operating at a really good level, which I think we're seeing now amongst at least the big three, what are your differentiators? One is going to be the user experience. Another: are you focused on something in particular, a particular context? And connected to that, what data do you have about that context, and is that data proprietary, so not anyone else can just come and build off of it? I think there's going to be a ton of value at the confluence of those two things.

Nik (13:20) I'll say we've experienced this. In the lab, we've been building some interesting intelligence tools to help people brainstorm better and come up with better ideas. And one of the things we've been finding is that we'll bring this to experts, and while they'll say there's good stuff here, the AI is doing a pretty good job, there's all this stuff that you don't know that my company actually has a lot of information on, if we could just connect this. So one of the things that might really set a thing apart is: can your application, the thing you're designing, integrate well with the knowledge that's already within a company? Can you connect to their databases? Can you get their information?
And there are a lot of companies that are thinking about this and doing this, and there are even companies helping you get your information into a better space if it's not in great shape. I think it's a little funny, like we're back to the intranet wave again. But interestingly enough, if your company has an intranet with an internal wiki and you can get an AI to parse that, now all of a sudden you can actually start working with that. But then, yeah, exactly: the user experience. And that's the other thing. Chat is limited. Right now we're still in the v0 era where everything's just a chatbot, when in reality we're finding that good interface, good information layout, good information architecture, good features that help people parse, especially when it's complicated information, when it's a lot of information, is what's going to set things apart. And I still think that's where we as human designers have a big leg up: being able to understand that and build that for users.

Dan (14:58) We haven't talked nearly enough on this podcast about new methods that are going into the design process. We've talked a lot about how the design process is changing and warping, but one of the things I think is super helpful at the beginning of the design process, when you're doing some kind of research, is to figure out what kinds of data you can get your hands on, because that data plus AI capabilities plus your context is what's going to power some of these really amazing ideas. You can have an amazing context and an amazing model, but that's not going to help you unless you have really good data to back it up.

Nik (15:47) Yeah, this comes up even when we just think about user research. While it's going to be pretty cool to use things like synthetic users, AI-generated personas, this and that, how long is the shelf life on those? The reality is you probably need to be collecting new information all the time.

Dan (16:06) I'm starting to sweat here already. Our next story has the best title of any of the stories we're covering this week, which is The Sacred Values of Future AI. This essay explores the concept of sacred values, which are principles that an AI might be programmed never to violate, regardless of the utility. It questions whether we can successfully hard-code things like human morality into AI systems, and it warns of the brittleness that occurs when an AI encounters a situation where two sacred values come into conflict; the trolley problem is a traditional version of this. It's kind of a cautionary look at the long-term governance of superintelligent systems. Nik, you got a hot take on sacred values?

Nik (17:08) I mean, this is just a very thorny area of ethics, and arguably AI ethics. We've been thinking about this for a long time; I think of Asimov's laws. And the reality is that even with Asimov's laws you can run into situations where you will violate the laws, or you can't make two things be congruent with each other, which is kind of the point. It brings up how people are trying to deal with it. I'll link back to the Claude soul document we
talked about a few episodes back, and their idea of constitutional AI, where what they're trying to do is design this constitution, this document that tries to get the model to act in an ethical way. And one of the things they talk about in that is that they don't actually have very many hard rules, because it's really challenging. Now, of course, in the news recently, Anthropic has taken a stance with some very hard lines they don't want to cross with regard to government use of AI systems. But in general they're not operating with these sort of sacred, hard-line things. The debate goes round and round. The article is really interesting, and I think it's worth taking a deep dive into it. But the reality is that we're going to be talking about this in episode 150, when someone writes another article about something like this again.

Dan (18:42) I hope this is something that we keep seeing and debating and talking about, because frankly, I'm not seeing a lot of other people step forward with something like the Claude soul document. And I'd like to see it, personally. That's my hot take.

Nik (18:57) Yeah, one last thing I'll say about this article that is kind of interesting is some of the proposed interventions. I do like the idea of this continuous revisiting of something, reflecting on it, and then updating these things. This is how people work. And one of the cool things we're seeing is that these reflective loops are really effective at getting agents to do stuff that we generally like and that generally works. The code agents are using these reflective loops to say: well, what did I do? Did this work? No, it didn't. How can I fix it? Let me update, let me try something else. And in a way, one can almost think about these values as having very long reflective loops, with arguably humans involved in those very long reflective loop cycles, as a way to continuously update these things. I can feel the heat now already too, Dan. So next up, a new tool: Open Pencil is a new initiative focused on creating low-latency, open-source tools for digital sketching and whiteboarding. It offers an open source alternative to Figma, basically, but with native AI integration. Dan, what's your hot take?

Dan (20:10) My hot take is this is probably going to be Linux on the desktop, in that I just don't see it catching on. Maybe if it had come a couple of years ago or something like that. I think the Figma base is very established and very hard to break at this point. Just ask Adobe. And I think moving to a new tool is a big adjustment, and people are already moving to a new tool, and that tool is Claude or Gemini, moving more in that direction rather than to another Figma-style sketching and drawing tool. I admire the effort and I think it's an interesting idea. I don't see it going anywhere. You?

Nik (21:00) I think it's a fair take. At the same time, Dan, this is the year of the Linux desktop. For folks out there, I live also in software dev world a little bit, and everyone is super hyped on Omarchy. I don't know if I'm pronouncing it correctly, but that's the new Arch distribution with apparently good presets from DHH.

Dan (21:06) Ha ha ha.
Nik (21:23) But actually, no, legitimately, people are pretty excited about it, at least in the software dev world. Is it the year of the Linux desktop for everyone? No. That being said, I think this is a really cool alternative for folks who want to stay open source, folks who have a strong belief in using open source products, folks who want to have more definition and potentially more say in the design of their tools. I'm for it. I'm a big believer in open source projects and open source software, so I'm excited about this. Where it could potentially get exciting is in the same way we see OpenClaw as this big open source project where people are building all kinds of zany stuff. What's most likely going to happen is that the big players will start incorporating some of the best ideas from that, but all the experimentation is happening out in OpenClaw land. We could potentially see that here, with a few extreme-user designers who are willing to jump in with this open source stuff and connect AI agents to it. We could see a lot of really interesting new use cases emerge, and arguably those could then find their way into larger, more professionally managed software packages.

Dan (22:36) I don't know. Yeah, this is hot takes. So hot.

Nik (22:40) That's fair. I mean, hey, this is me. This is the open source optimist in me. Yeah.

Dan (22:46) Our next story is from Muhammad Dani Asarofi, on training design judgment. This article is all about how to transition from being kind of a junior pixel pusher to a senior designer by cultivating design judgment, kind of like the earlier story was all about product judgment. It outlines a lot of specific exercises for analyzing why certain interfaces work and why others fail. And it moves past aesthetics to really functional critiques, which is awesome: moving past UI and more into UX land. One of the big takeaways here is that by being able to read products through the lens of business goals and user psychology, designers can better justify their decisions to stakeholders and, ergo, move from being a pixel pusher up to a more senior designer. Nik, what's your hot take?

Nik (23:46) I think we should always be cultivating our ability to provide this level of design judgment: to determine why we think different interfaces and interactions work well, why they don't, whether they fit with the goals. That's how we mature as designers. I really do like this idea, though, of thinking about what you can actively do as an exercise to help you build this skill, because that's something I don't know if we're always doing. In classes, we might make assignments for people, but once you're in the field, once you're working, you've got a job to do; you might not be sitting down saying, well, let me do these exercises. You're going to get that by osmosis, by just doing your job, and eventually you start to develop this sense. But I really like the idea of actually doing exercises. I'm thinking about it the same way software developers have had tools online to help you learn coding skills and prep for code interviews. We could almost imagine a similar tool for designers to help you start developing your judgment. Actually, this could be a really interesting product: you're given a brief,
you're asked to do a critique, you're given some set of design goals. I'm thinking about junior designers using this tool. I don't know, maybe this is a startup, in the same way that LeetCode is there to help you prep for your interviews. Because that's the other thing: this is the kind of thing that I have definitely heard of in a design-oriented interview. Tell me why you're making these decisions. They want to see your thinking in this way.

Dan (25:19) Yeah, absolutely. In the advanced design studio class that I teach, one of the very first things we do now is have the students really start to break apart an interface, see how it works, and figure out why stuff works. And in the fundamentals class, we have them look at a whole bunch of different interfaces and try to read them and figure out: are there problems with them? What's working from an interaction-laws standpoint versus what's not? Why are things broken? Why do things work? Why do things not work? So I think this idea of design product literacy is a really interesting one. As AI starts to pump this stuff out, being able at a glance to examine these things with that same design eye, so you can say, well, this is not working, is a skill that's going to get more and more important, not just for junior designers trying to become senior designers. At this point, we all need to be senior designers.

Nik (26:27) Yeah, I'm now imagining working at a company and saying, hey, maybe we're trying these AI design tools out, and having these sort of round-table critiques where you come together and do this exercise with your seniors and your juniors. This could be a great way to learn from each other, and a great way for your team to cultivate this skill so that everyone is leveling up in their ability. And then maybe that also helps you start using these tools better, if you choose to use them. All right, the next story is hot, but maybe not in a good way. M.H. Dempsey argues that the pace of AI advancement is outstripping our legal and ethical frameworks so quickly that we have roughly one year to establish meaningful guardrails. He warns that once models reach a certain level of autonomy and ubiquity, reining them in will become technically impossible, making the current moment the most critical period in the history of technology regulation. Dan, hot take.

Dan (27:27) I just wanted to put my face in my hands when I read this article, because this may be the most critical period in the history of technology regulation, but we are definitely not in a period of technology regulation. All the people, at least in the US federal government, are absolutely unwilling to do anything to put the brakes on the AI train. In fact, the opposite: what more can they do to make sure nothing gets in the way? So yeah, this really scared me. My hope is that individual states like California, which is very powerful and where a lot of these AI companies are based, and the EU would be able to step in here and really help put some guardrails around these things. Because I agree, these things are getting very autonomous.
See Clawdbot. And ubiquitous: more and more people are using these tools, and they're being built more and more into all these different workflows. So yeah, reining them in or putting any kind of regulations around them is just going to get harder and harder and harder.

Nik (28:38) At the end of the day, at least for now, no matter how autonomous an AI system is in doing a job, someone is asking it or having it do that job. These things aren't just roaming around the internet entirely on their own yet. I guess some of the Clawdbot stuff is getting there, where they maybe start to define their own goals.

Dan (28:56) Mm-hmm.

Nik (28:57) I'll say I still feel like that is maybe a little far out in the future, but I guess maybe we only have 12 months. I still think, though, that at the end of the day someone is deploying something, and they need to have some kind of responsibility behind that. And that is something where I'm not entirely sure even that level of regulation is totally there. We have it in certain fields, legal, medical, financial stuff, but there are a lot of places where we basically don't have any serious, strong regulation, because you haven't needed it. Yeah, this was definitely a tougher read. And I'll say, I do know there was a moment, probably four or five years ago, where it felt like there were responsible AI calls for regulation; there were people at many companies whose job was responsible AI lead, and then a lot of that just vanished. And I don't know where things are going without folks doing that. I hope that if you're out there and you are doing this: thank you. Please continue the best you can.

Dan (30:01) Okay, the heat is on. Next piece: an article from Peter Gaston on the future of AI images. This piece looks past the novelty phase of AI art and the little things that people can make, and predicts a future where AI image generation is completely invisible and baked into every creative tool. He discusses a move towards more latent space exploration, where designers, again, are not just typing prompts but navigating a multi-dimensional space of possibilities using things like our old friends: sliders, brushes, and spatial layouts. I think this is a really cool, interesting vision of the future. Nik, what was your hot take?

Nik (30:51) Yeah, I like this idea of latent space exploration. I'll say, though, full disclosure, I have a student, David Lin, who has been really interested in these ideas, and he's explored and built some interfaces for how you would work with the underlying information within a generative model to navigate, come up with ideas, and have it generate things. So I think that's really interesting and astute. And yeah, there's this idea that it's basically going to be everywhere, and you are going to be in these generative worlds that you need to navigate. Then the question comes up: what are the interfaces for that? How do you interact with those generative worlds? What's going to help people create the things they want to create, or what's going to help AI systems understand people and create what they want even if they're not explicitly asking for it? And it's possible that even the AI systems will be able to work within this latent space better with those same types of controls, that they'll work better for them too.
All right, it sure is getting hot in here, Dan. Our next article, actually from Muhammad Dani Asarofi, the same author as Training Design Judgment, drops another truth bomb and predicts that AI won't kill design jobs, but will separate strategic thinkers from production-only designers. He says AI will handle the commodity work, creating buttons, icons, and standard layouts, which will inevitably eliminate roles for designers who only do production work. But he says this creates a massive premium for designers who can handle high-level strategy, systems thinking, and complex user advocacy. Dan, bring the heat.

Dan (32:27) Yeah, I think this is the probably brutal truth that most people are unwilling to say: if you were kind of stuck in that lower realm of production, yeah, things are going to get real tough for you real fast. And I think that is true, which is again why we teach all of our students to try to move past that as quickly as they can, to get into these more strategic, building, thinking type roles, versus staying down at the level of putting components together. Those kinds of roles, I think, are just a scary place to be in the next couple of years.

Nik (33:12) I will say, though, it does make me think, maybe just to try to come up with a counterpoint: are there going to be folks who are so creative, who work in production and aren't working at this higher, more strategic level, but whose work is so out there, so novel, so great, and generating so many new ideas, that they will still be able to live in this production mode? And I guess I'm thinking about it as a different type of production. If you're creating just buttons, icons, and layouts, then yeah, that stuff is often determined by your design system team, at least in digital design. With any designed product, the components are figured out and then you're figuring out how to put them together, and that stuff will probably be handled more by autonomous systems. So yeah, I do think this high-level strategy role is where people are probably going to need to be working more. But I do wonder: what does the counter-argument production person look like?

Dan (34:21) I will say, as a caveat to my own agreement with this concept, there definitely were people I have worked with in the past, and I'm thinking particularly of someone who was an amazing iconographer, and I worked with an amazing typographer, who have such very specific, very skilled, what I guess is called production work. Because they are such amazing craftspeople, it elevates that work to something else. And I think, as we talked about last week with Jenny Wynn's long T, they have that skill where they can go really, really deep into this one particular area. Or maybe they are people who are amazing at, I don't know how long pure prompting or pure prompt engineering is going to last, but maybe these are people who are extremely good at that, at being able to tease out requirements and things like that, and being really good about translating that into things that AI can understand really well. Although,
now that I'm saying that, I'm thinking, well, that's going to have to be almost everybody. Definitely sweating here. So: Pittsburgh Yinzer Brad Frost, Mr. Atomic Design and a pioneer of design systems, dropped a video demonstrating a concept he calls real-time UI. The idea here is that interfaces are no longer pre-designed components; the components are generated on the fly to meet a user's specific intent. He does this in the video just by talking through it and watching the components and the interfaces get built. It's pretty cool. And it really shifts the role of the designer from a person drawing screens to someone authoring the rules that govern how the system assembles itself in response to live data and AI reasoning. How do you think about that, Nik?

Nik (36:34) Yeah, I am excited about generative UI as an area that we'll begin working in. And I think the concept here, and maybe some of the design language forming around it that Brad might be able to put together, is going to be really valuable for the field. I definitely see this becoming an area of design work where you are authoring the system that generates the interface, not just the interface itself. I think the hardest things here are going to be figuring out how you understand what a user's intent is and how you actually generate things that work for them. Because while, from a future-vision perspective, I think it's super compelling that things will just matter-compile for you, you then have to have really good authoring systems behind them that say, I understand your needs and what you're trying to do, and I'm going to give you something good. Or else what you're going to end up with is just a bunch of live-generated awful experiences.

Dan (37:33) Right. Right. Wait, that wasn't what I wanted. That wasn't what I wanted. Yeah, no, for sure. Again, you can generate anything you want, but it doesn't mean it's going to be good or useful or appropriate or what you need at that moment. My students are working on this in our UI for AI class; they call it temporal or ephemeral UI. I personally like GenUI, I think it's kind of fun. But for them, it's what happens when something appears, is there for a while, and then kind of vanishes when you don't need it. And I think this is something we're going to start to see a lot of in a year or two, or less.

Nik (38:19) Yeah, I think we're going to start seeing it in small places first. We've seen concepts for this for quite some time now. Like Google showed: I'm having a chat with Gemini, and all of a sudden, because I'm planning a trip, it generates a flight picker. Because a small interface is actually way better than us just chatting this through. It's just: here you go, confirm this, click a couple of buttons, good.

Dan (38:28) Yeah. Mm-hmm.

Nik (38:45) Let's move forward with this, and then the interface goes away or just scrolls up in the chat log. I think we'll see it there first. But it'll be interesting to think about what the larger concepts are for full applications that do this. Where we might start seeing it first in that sense is maybe in very expert-user land, specific tasks, and maybe in spaces where people don't have good software. Because it's actually kind of weird, right?
If you're an expert user, actually, this brings us back to the earlier article on SaaS products: if you are an expert at a SaaS product, kicking you out of that product is a real pain, right? Even just having to learn a new one. Sometimes when we hire people, we ask for specific skills in a particular tool, like Salesforce, not some other CRM, because we want to know that you can work in Salesforce and start being effective. So I don't know, it almost feels like maybe this gets picked up more in spaces where people figure out there isn't good software for this, but a bunch of people think about the task differently. So we'll build a generative system, and it'll be the same kind of tasks, but it'll be generating interfaces on the fly. I don't know. I think that's one of the things I'd like to see more of: what are some good examples of where this model could start to work, so we can start experimenting there?

Dan (40:06) Mm-hmm. Are there places where this would be easily implementable and more effective than what we already have? Like, how is this any better than a widget or just going to a dedicated app for finding flights? It's got to be better than that in order to be worth building and implementing. Because what if it assembles itself differently every single time? Am I going to be like, well, how do I work this this time? Oh wait, the colors are different this time. It's like, oh God.

Nik (40:46) Yeah, actually, this makes me think about aspects of service. When someone is providing you a service, having the same person there who knows something about you, and where you know how they interact, can be a very good experience. You learn how each other communicate. When you work with a business that, for example, has a different service representative each time, and for some reason they each do things differently, that can be really frustrating. That's the analogy I'm thinking of here, if that's what my software did. In many ways, that was the whole point of service-oriented software: it became the same thing each time, a structured and standardized service offering. So I don't know. I feel like we're going to have to design for this, and this is where memory becomes important. You're going to have to design and integrate those memory systems so that when a user comes back... Yeah, I don't know. Super interesting space, and I don't have all the answers at all on how to design this kind of stuff. I'm excited to see that Brad's thinking about it.

Dan (41:45) Yeah, I mean, people do like consistency. This is what fast food is all about, right? I can go into any McDonald's and order the same thing, whether I'm in France or Indianapolis.

Nik (41:59) Yeah, and it is like a sauna in here now. Our second-to-last story is How to Write a Good Spec for AI Agents, which is a meaty technical guide on defining requirements and constraints for autonomous AI agents, i.e. the code agents that you would use to generate your designs. Unlike traditional software specs that define if-this-then-that, an AI agent spec must define goals, boundaries, and personality.
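As a rough illustration of the goals-boundaries-personality framing Nik just summarized, here is a sketch written as a Python structure. The field names and the example goal are assumptions made for illustration, not taken from the article; the ask-first and never items echo the examples Dan calls out below. Nik's summary continues right after the code.

```python
# Sketch of an agent spec expressed as data rather than if-this-then-that rules.
# Everything here is illustrative; only the ask-first/never examples mirror
# the ones mentioned in the discussion of the article.
AGENT_SPEC = {
    "goal": "Refactor the checkout flow for accessibility without changing behavior",
    "success_criteria": [
        "All interactive elements reachable by keyboard",
        "Existing tests still pass",
    ],
    "boundaries": {
        # Things the agent must pause and ask a human about first.
        "always_ask_first": [
            "database schema changes",
            "adding new dependencies",
        ],
        # Hard lines the agent should never cross on its own.
        "never": [
            "commit secrets or credentials",
            "modify the command-line config",
        ],
    },
    # How the agent should communicate while it works.
    "personality": "Terse status updates; flag uncertainty instead of guessing",
}
```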
It provides a framework for engineers, PMs, and designers to communicate

Dan (42:01) you

Nik (42:27) what they want an agent to achieve without over-constraining the model's ability to find creative solutions. Dan, what's your hot take?

Dan (42:33) I wanted to kiss the author of this, who is Addy Osmani. This is so meaty, so practical, and so needed that I'm hoping to start teaching it to my students when we get to the AI module at the end of the semester, because I haven't seen anything this good about this particular area before. I'm really keen on it. There are a lot of things he talks about in the article; boundaries, I thought, were really interesting. Like, hey, always ask first for things like database schema changes or adding dependencies, and never do things like committing secrets or modifying the command-line config. He also talks a lot about not doing these kinds of giant monolithic prompts, and instead doing much more focused prompts with relevant context so that you get better output. I really think this is a top-notch article that I hope designers working on agents dig into, that I hope PMs working on agents dig into. It's a really, really good, meaty article.

Nik (44:01) Yeah, if you look at the thing on O'Reilly, it says 37 minutes, which these days feels like, oh my gosh, that's a long time. Actually, I would say then copy and paste it into your agent and say, let's start implementing this, help me implement this, because you can do that. If I'm going to say anything, I agree with what you're saying here, Dan. We've been showing students how to do this; it's been a common strategy for getting better output from your agent: give it good specs, a good product requirements document. This article gives a lot of tips. If I'm going to highlight a couple of things for people to check out, one would be plan mode. This is something not everyone uses and people are still learning about. The code agents often have a mode called plan mode; you can change it in the little selector at the bottom in Cursor or VS Code, and in Claude Code you can say, let's use plan mode. Plan mode is a great way to start having the AI agent work with you as a thinking partner, to plan out and think through what it is you want to build and have it ask you questions. Then it starts to create and construct this document. So again, maybe a great exercise is to grab this article, copy and paste it into plan mode, and start working with an AI agent to develop the PRD that you then feed to the rest of Claude Code and say, all right, go build it.

Dan (45:29) Okay, our last story is so hot, I'm surprised your devices aren't melting. It's called Breaking the Echo Chamber in Your Interface, by Dora Schirna. The idea here is that modern design powered by AI often creates echo chambers by only showing us what we already like and telling us that everything is amazing. So this piece challenges us to build a lot more friction and a lot more serendipity back into the experience, encouraging users to encounter diverse viewpoints and unexpected content. It's a call to action for ethical design that prioritizes human growth over engagement metrics. Nik, one final hot take?

Nik (46:17) Yeah, I mean, this is cool. I like this idea.
I'll say, we work on metacognitive agents to support designers, basically agents that try to think about how you're working and then maybe push back. So this is cool. I like thinking about this from the perspective of user-driven experiences, because so many times you say, well, I would like this, and it says, yes, of course, that's a great thing. The Anthropic ads were basically highlighting the fact that OpenAI's agent might say everything you have is a great idea, when in reality that may not be what you actually want. You might want something to push back. So the idea of starting to think about what the experience design for that is, and how you do it in a way that doesn't upset a user and still helps them achieve their task, I think is a really challenging interaction design problem. I'm excited to see people work on it, and I think this is a cool article to get people starting to think about it.

Dan (47:12) I will say that one thing I learned from working at Twitter is that when you actually do start to put contrasting viewpoints in people's social feeds, one of the things we found out, because we were trying to do this kind of break-the-echo-chamber, pop-the-information-bubble thing, was that sometimes it made people really dig in their heels and double down on their position, rather than saying, well, that's an interesting point. It's a harder problem than it seems. Obviously I think it's a great goal, but it's a much harder problem than it seems on the surface. Breaking the sycophancy problem is an interesting thing, because as we've talked about previously, would you want an AI that challenges you all the time, that is constantly saying, I don't think that's a really great idea? But there may be times where that is absolutely essential, or times where it is important for mental health or for important decisions to have the AI be able to push back at you, so that it eventually becomes more like a trusted friend who's not going to lead you astray than a random machine.

Nik (48:35) We have the role of conversational designer, or conversation designer, and this is a skill they should probably be thinking about, and they may even have this skill; this may be why they got into conversation design. I'm also trying to think about who else within organizations knows how to do this well within a team. Might they be guides into how to actually put this into our products?

Dan (48:57) Yeah, I mean, the conversational UI folks are 10 years ahead of the rest of us in these kinds of things: how to phrase questions, how to be challenging without being confrontational. Certainly therapists are trained in this way, and that is something that would be very helpful to start having on some of these teams, especially if people are going to be using AI as therapists, as we're seeing a lot of people, especially a lot of young people, turn to. And we've seen some really bad results from that. So it would be nice to have that voice heard in the room as some of these decisions about personality are being made. And the last thing I'll say is that the conversational UI people are really good about things like teasing out intent, because they've been doing that with voice for a long time.
So anything we can do to start bringing their knowledge out to the rest of us, I think, is a really important and probably powerful thing that we should all start looking into more.

Nik (50:16) And with that, man, that was a speed run. Okay.

Dan (50:20) Whoo, hot and steamy in here.

Nik (50:25) So thank you all for being with us this week. Hopefully there were some stories in there that you thought were interesting. We'll have links to all the stories on our website in the show notes, so you can find those and take a read for yourself if there were ideas you want to incorporate into the work that you do. We'll be back next week, probably with a bit of a slower episode, just a couple of stories. We hope to see you there.

Dan (50:50) Boy, I hope so.