Nik (00:05) Welcome to AI and Design, where we explore how artificial intelligence is reshaping the world of design. I'm Nik Martelaro.
Dan (00:10) And I'm Dan Saffer, and we're faculty at Carnegie Mellon's Human-Computer Interaction Institute, the HCII. Each week, we break down the latest AI developments, dive deep into topics that matter to designers, and talk with fascinating guests who are right at the intersection of these fields.
Nik (00:29) Whether you're a designer working with AI or an AI practitioner interested in design, we're glad you're here.
Dan (00:34) On today's episode, we'll be discussing Anthropic's announcement that its Claude Code and Claude Cowork tools are being updated to accomplish tasks using your computer, just like a human would.
Nik (00:47) Mike Watson writes about how AI didn't kill product intuition, it exposed who never had it.
Dan (00:52) And then we have our interview with designer and author Chris Noessel, author of Make It So, the book about sci-fi interfaces, and Designing Agentive Technology. He's got a new book out called Designing Assistant Technology.
Nik (01:06) But before we begin, a quick follow-up to a story we covered last week. Google designer MC Dean has been publishing a ton of open and freely available design skills for AI agents. Last week, we covered the 103 design skills that she published, including a number of them for accessibility. Now Dean is back and has created an entire design team of AI agents. We're gonna link to this in the show notes.
Dan (01:31) And we should really have her on as a guest. That stuff's amazing. Every week there's something new, something incredible. It inspired me to do a whole bunch of skills last week. So yeah, MC Dean, we're coming for you. But first, let's go back and talk about the news from Anthropic. What is this news from Anthropic?
Nik (01:53) Yeah, so Anthropic announced that Claude Code and Claude Cowork are now able to accomplish tasks using your computer, i.e., a computer use agent.
So this means that they're able to basically use the graphical user interface of your computer, not just do things through, say, software or programming, which is typically how Claude and Claude Code actually work. They write code to do things, but now Claude is basically going to be able to click around and navigate your computer. And this is exciting and a little bit terrifying in some ways.
Dan (02:25) I was gonna say, this sounds dangerous. This sounds like when we talked about OpenClaw having free rein of your computer. I'm scared.
Nik (02:35) Yeah, and that is the thing. Anthropic is taking a position where there's a lot of verification from the user that's required before it takes certain actions. The user has to basically say, yes, it's okay for you to take this action. No, don't take that action. You can choose where it can work on your computer. But yeah, I think you're right. Actually, this is in some ways Anthropic's answer to the OpenClaw movement, by basically allowing Claude to do much more on its own, interacting with the software that you might interact with every day. I think this is potentially exciting, and it's going to change how people, as they start to figure this out, might do certain tasks. Where I think this is interesting from a design perspective, though, is what does it mean for the way in which we design our software? Right now we primarily focus on designing for human users and their abilities. However, I do wonder, for example, if this is gonna change the way we might design, especially graphical user interfaces, so that they work well for computer use agents. A very simple example: there's a lot of software that has decided to eschew text-based labels for many buttons, although we actually know from good HCI that having both a label and an icon for a button can be really good.
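Nik's point about labels can be made concrete with a toy sketch: an agent that can resolve a target by its accessible text label finds it trivially, while an icon-only control forces it back onto guessing from pixels. Everything below, the UI tree, the field names, the glyph names, is invented for illustration, not any real accessibility API:

```python
# Toy sketch: why text labels help a computer-use agent.
# The UI tree and names here are illustrative assumptions only.

buttons = [
    {"id": 1, "icon": "mic-glyph", "label": "Mute"},
    {"id": 2, "icon": "square-glyph", "label": None},  # icon-only, like Riverside's buttons
    {"id": 3, "icon": "gear-glyph", "label": "Settings"},
]

def find_button(ui, name):
    """Resolve a target by its visible/accessible label, as an agent might."""
    for b in ui:
        if b["label"] and b["label"].lower() == name.lower():
            return b["id"]
    return None  # an unlabeled control forces the agent to interpret the icon itself

print(find_button(buttons, "Mute"))    # found by its label: 1
print(find_button(buttons, "Record"))  # no label to match: None
```

The same label that helps a screen reader announce the button is what lets the agent act without vision heuristics, which is the accessibility-and-agents overlap Dan raises next.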
I'll say, Dan, we're using a piece of software, Riverside, here right now that actually has no labels under the buttons. And so if we said, hey Claude, please work with this software to do stuff, it basically has to interpret just the symbols, as opposed to being able to read the text and then know exactly which button to go to, because actually the OCR for these systems is really good. So it's these kinds of small things that I wonder if we're gonna start seeing change so that software is more usable both for people and for these computer use agents.
Dan (04:19) This could be a great push for more accessible design, where you're using some of the best practices from accessibility to make your product more universally available, not just for people, but also for agents working on people's behalf. And maybe that will get some execs to care about accessibility in a way that they are often reluctant to spend the time, effort, and money to do. So I think that's a great boon to everyone, especially as we're starting to see a lot of research reports coming out saying that the amount of agentic traffic to websites is increasing by orders of magnitude, and on some websites it's more than the human traffic, which is staggering to me. But I guess that's the world we live in today. Now I wonder, what does this do for places like Amazon? Amazon has pretty famously been very like, hey, AI, stay out of our store.
Nik (05:28) I mean, there's a lot of sites that are trying to keep away bots, especially bots that are hitting with really heavy traffic. There are just elements of load and performance issues that come up when you have too many bots on a site. You know, something else you're making me think of, Dan, is I wonder, from a user perspective, do people start choosing different products and services simply because they work better with their agents? Right?
So like, if I'm using a certain piece of software and it doesn't work well with my Claude Cowork agent, can I use something else? And of course it's also bringing up the question of, do you provide an MCP server, and then that's really how an agent is going to operate? And arguably that could be better, because you have much more control as a company, as a provider, to say, AI, this is what you can do with our service. Whereas with a computer use agent, the AI can do whatever the human could do. But on the flip side, I actually think computer use agents are interesting because they're a bit more visible. When they work, it's actually like moving your mouse and clicking around on the screen, and you can see stuff changing. And so there's this ability for you to watch as something's happening. Designing for that experience, I think, is also gonna be interesting, because if I'm watching something move slowly, which is kind of how it feels now, I've done a little bit of this, it kind of reminded me of the first time I got my robot vacuum cleaner and I sat there for like half an hour watching it, saying, ooh, is it gonna get stuck? How's it gonna handle this carpet-to-wood-floor transition? But after a while, of course, it works fine. And then I only think about it when it does get stuck. Is that the experience we're gonna be designing for? Or are there other ways in which to design this experience of a computer use agent for visibility and verification? I think that's a really open question.
Dan (07:18) Just getting back to Amazon. So earlier this month, Amazon actually secured a preliminary court order against Perplexity. And the court ruled that Perplexity's Comet agent violated federal and state hacking laws by accessing password-protected accounts without Amazon's authorization, even if the user consented. Something like that.
And Amazon now has new policies in effect as of March 4th of this year that require any automated agent to clearly identify itself in HTTP request headers and comply with a specific agent policy. And one of the requirements is that agents must not attempt to mimic human behavior.
Nik (08:07) Wow. I actually wonder then, given what you're saying, whether there could be a way for Anthropic, or anyone that's creating a computer use agent, to basically say, I'm sorry, I really can't use Amazon. I recognize I'm on Amazon. I can't use Amazon. You have to use Amazon yourself. The question is, does a competitor, does Walmart, come up and say, hey, come on in, Claude. Buy whatever you want. We're here for you. And do people then start to change their habits because of that? I'm not sure.
Dan (08:32) I think a company like Walmart or Target or one of those could definitely do that, because one of the reasons Amazon is being so fussy about this is that they have a $56 billion advertising business, and they don't want bots and agents screwing that up. They don't want bots to bypass sponsored listings and recommendations. So if you don't have those things, if you're Walmart or Target or something like that, then, you know, maybe you do. You're like, hey, come on in. Here you go, agent. The door is open for you.
Nik (09:11) Now, one last thing on this. I do wonder, though: Amazon is an investor in Anthropic, and it's possible they could work out a deal, and we may even see a co-designed experience for Claude being able to use Amazon in a specific way.
Dan (09:27) Does Claude have to make friends with Amazon's agent Rufus, and then they can shop together like they're teenagers at a mall? That could be something. That's a show on the CW I would watch. Oh, goodness. Our next story is from Mike Watson over at Product Party. And the story is called AI Didn't Kill Product Intuition. It Exposed Who Never Had It.
The idea behind the article is that everyone thinks product intuition is spotting a winning idea, where you come in pitching the idea and the PM is like, yeah, that's a great idea, like you're on Shark Tank or something. And Mike's idea here is that real product intuition is knowing which ideas are fragile before you've really invested in them. You say, okay, this may be something, and so you say to somebody else, hey, poke holes in this idea. Is there something here? And then you don't embarrass yourself with a, well, that would go against compliance, or yeah, there are laws against that, or hey, we tried this before and it didn't go anywhere. So it saves you time: you don't have to pursue a project idea that is actually deeply flawed. And now things are a little bit crazy, because that colleague is always available, doesn't get tired, and you can keep asking it questions as long as you've got tokens. If you have an AI with you that's got the right framing and maybe the right skills and right prompting, it's going to interrogate your idea the way a skeptical fellow PM, fellow designer, or another stakeholder would. It won't know everything, because you have to provide a lot of the context here, but it's an interesting idea: product intuition was never about having good ideas. It was always mostly about getting rid of the fragile or bad ones.
Nik (11:46) Yeah, I think this is a pretty good point to make about what it is that we do when we are down-selecting from, hopefully, the many, many ideas that we have. The strong designerly sensibility and product sensibility is really about picking good avenues to explore, because ultimately you're going to invest time, energy, and money into developing prototypes. And so you want to be on a couple of good paths that hopefully will turn into great paths, or one great path that will be a great product or service. What is the metaphor?
You plant a garden and then you see what sprouts, but you have to cut down maybe a lot of flowers to really find some of the best ones. I don't know if I have that totally right.
Dan (12:31) That doesn't sound right. You have to pull a lot of weeds to get to the flowers? I don't know, something. We're not cutting that.
Nik (12:33) There's something. Yeah, cut that. Okay. But yeah, I think I generally agree with this. The challenge I have a little bit with this is that I'm not sure how good AI is at it, because of the tendency for AI to give you something within whatever you ask. So can you ask AI to be skeptical? Yes, totally. I actually used a prompt once from, I want to say, Nous Research, where it was kind of supposed to be edgy. It was supposed to not be a sycophant, it was kind of the anti-sycophant, and it was so mean. Now, was it being skeptical? Was it trying to poke holes in things? Yeah, but I'm not sure they were good holes. They were holes, though.
Dan (13:26) Mm-hmm.
Nik (13:29) And this is maybe a...
Dan (13:29) Was it just playing devil's advocate? I'm just asking questions here, Nik.
Nik (13:36) Yeah, exactly. Asking questions. And I think those are good. I think on a strong team with a lot of team members who have good sensibility and can think in multiple different directions, they will think of those. They will ask that question, but they're also very quick to say, no, there is no hole here, I need to move on, I need to try to find something that's real. And so I just wonder how good this is. And I'll say, I think it's super interesting to try to explore this. I think people should try it. I think people should try saying, be skeptical, find the flaws in this. But don't just assume that it's gonna be right.
In the same way, if you say, hey, tell me why this idea is great, it's always gonna tell you why it's great. It's gonna tell you you're so smart, and it's wonderful, and everything's gonna be magical. But it's going to do that because you told it to, and that is the predicted next thing it should say, whether you're asking why is this great or why is this not great.
Dan (14:28) And this is a really tough design problem, and one that our special guest today, Chris Noessel, was talking about in his new book, Designing Assistant Technology. So let's head over and talk to Chris about this.
Nik (14:43) All right, so we're here with Chris Noessel to talk about his new book, Designing Assistant Technology: AI That Makes People Smarter. And I'll just give a quick introduction to the book. When artificial intelligence is designed poorly, it diminishes people's skills rather than enhancing them. It can even make us less capable and more dependent on AI. In Designing Assistant Technology, Chris Noessel provides a framework for how to use AI to assist users as well as mitigate the risks of de-skilling and over-reliance on AI. Chris, welcome to the show. I'm super excited about this book for a number of reasons, partly because I feel like you've hit the moment pretty well. And in addition, in my own research, I'm quite interested in and excited about AI that helps
Christopher Noessel (15:20) Yeah.
Nik (15:26) make us think more, that helps make us more excited. So I was really, really jazzed to see you coming out with this book. Maybe to start off with the first question: your last book was about agentive tech, and clearly we're seeing a huge increase in agents that are out there, and that's really what's been in the zeitgeist today. I'm curious, what motivated you to come out with this book at this time?
Christopher Noessel (15:52) I appreciate your noting that this book is landing at the correct time, because that prior book did not. I was two years ahead of that particular curve. And so even my publisher was like, Chris, this makes a great deal of sense to me. And then there were kind of crickets for a while. And so he's like, well, maybe Chris doesn't know what he's talking about. And then Andrew Ng started talking about agentic, and actually people mishear it as agentive. Like, I still get SEO hits on the book from all the hubbub that's going around agents. I do present both of these concepts side by side a lot as part of a model of agency, right? You've got manual tools, where the user is the entire agency. You've got assistants, where the user is the primary agency. You've got agents, where the user is the secondary agency. And then you've got automatic tools, where there's nearly no agency involved except by the creators of the automation. So when I was realizing that concept of agentive technology, I was like, well, that's gonna sit next to assistant. But I thought the new thing was the thing I would have more to say about, so I wrote the agentive book first, got it out there, and then I held on to this concept. I said, well, I know I want to talk about assistants. It's important to talk about them. But I felt like, for the most part, that is what interaction design and software design has been doing for decades, and I didn't feel I had anything new to contribute. But then with the blossoming of generative AI in 2019, and my beginning to work on it around the same time, came this concept of, how can we make AI make us smarter? That's when I came across the notions of de-skilling and over-reliance. I was like, whoa, okay, now I do have something to say that I don't see written out there. So since it seemed new, since it seemed timely, I was like, hey, Lou, I finally got that follow-up.
Dan (17:47) And you were ahead of the curve then in 2019, thinking about and working on this kind of stuff too. Was that when you were at IBM?
Christopher Noessel (17:55) Yeah, yep. I think I did some work for years with the Institute for the Future, and I had dated an employee there, so I was very familiar with futurism. Running the Sci-Fi Interfaces blog for years also has me thinking about futuristic concepts. And one of the concepts from futurism is that of a signal out in the world. In futurism, that means any time you, as a futurist, encounter something that feels like it might have bigger, deeper meanings, you put it into a mental backpack and you keep hold of it. If you begin to collect other examples, you're like, wait a minute, I'm beginning to sense a pattern here. And that habit has stayed with me. So with the agentive book, I was doing a lot of traveling as a consultant at the time, and I was thinking a lot about the design of my automatic cat feeder, because it was doing everything I needed it to do when I was not available to watch it. So I was like, this is kind of awkward. And then I had seen Idiocracy, which also takes a central explanatory role in the new book, and that triage moment caught my eye and felt of a piece with the cat feeder. And finally, I was doing some work with a financial services company, I don't think I can name them, about a robo-investor. So those things all felt of a piece to me, and suddenly having three data points, rich examples I should say, data points makes it sound smaller, rich examples to really think through, is what began to coalesce that. So I have had this habit of looking just ahead of our current moment for a couple of decades now, and it has not served me poorly.
Nik (19:27) This is definitely written with a practitioner in mind. And I think that's one of the things I like about this book: it's got a ton of examples.
Actually, the book itself is beautiful, full color, with all of your wonderful sketchnotes to explain certain topics. Again, one of the things I really like is your use of examples, good and maybe not-so-good examples, of assistant technology and the different concepts. I'm curious, are there some examples that are really exciting, or some of your favorites, that you would recommend to our listeners? Like, hey, when you get the book, go check out this one, because it's a really great example.
Christopher Noessel (20:09) I fear I tipped my hand in the marketing of the book. Lou Rosenfeld, of Rosenfeld Media, always publishes the first chapter of any book on their website. Well, the example I would probably most point to is the one that opens the book. That's partially because it's a first-person perspective, but it's an example that many people understand quickly: driving. Google Maps is really well designed from a particular perspective, doing the task. I can identify where I am, where I'm going, what I have to do next, what lane I'm going to be in. I can switch it to audio mode, even though I'm a much more visual person, if I need to, all those sorts of things, really well done. And yet, after a year in the Bay Area, I couldn't find a neighborhood that a coworker casually mentioned to me one time. They were like, come over for a barbecue. And it struck me at that moment. I was like, wait a minute, how could I have been driving this network for a year and not have internalized what I internalized in six months in a much bigger, more heavily trafficked area? But let's dive into the things that aren't there. I present five what I call universal assists, which are the categories by which assistants can assist users. And all of my examples I tried to cherry-pick for their communicative properties in the context of one of those assists.
So one of my favorites is the Red Stripe app. I didn't even know about this app, because I have full color vision, but a friend of mine is profoundly colorblind. Eight years ago, we were at a party, and it just struck me because he was wearing a tie that matched a cravat in his pocket. And I was like, hey, how did you do that? I didn't think you could detect those colors. Do you have some kind of bag system at home where you always keep those things together? And he said, no, I use this Red Stripe app. The Red Stripe app uses some sophisticated algorithms to help people who are not just profoundly colorblind, but even slightly colorblind, identify colors in the world. You open up the app, you turn on a live camera, and it adds striping to the color regions in the camera view. So a red apple has a certain kind of striping, and if it's sitting next to
Dan (22:00) Ahem.
Christopher Noessel (22:25) a red fork, it will have the same kind of striping. And my friend used that app in the morning to look at his ties, look at his cravats or pocket squares or whatever. And that's how he was able to match them, literally holding them together and going, yep, can't tell the difference between these stripes, pretty confident this is going to work. So that's a favorite example. I'm also fond of an example I give of the know assist. I asked my dentist one day, because he mentioned something like, are you still at IBM? I thought about that and was like, you have hundreds of patients. There's no way you could casually remember where I work. So I said, do you have something that helps you with that? And he confessed, yeah, we have a patient management system that includes a little field where we can drop things. And so they add to that over time. The receptionist or the dental hygienist or the dentist can add to that field, and then he can see a summary of it.
The thing I love about this example is how bad that model was at summarizing the data behind it, because it said some nonsensical things. So it serves two purposes. The first is, as the book says, yeah, I get how that could refresh someone's memory. But it also reminds us to be incredibly skeptical, because these are probabilistic engines. Sometimes you're dealing with cheap models, and what comes out of that thing is not 100% trustworthy, to say the least. So that example is one of my favorites. And if I had to pick a third, I'll go with the Khanmigo interface. If you don't have kids and you haven't experienced Khan Academy before, it's a business dedicated to helping kids learn, sort of like an extra tutor, or they provide software tools. Well, I think it was in 2023, they released Khanmigo, which is a large language model with a big system prompt that says, don't give answers, help users find answers. And it's just brilliant. So I hopped on there and bought it for my older kid. He never really bonded with it as much as I had hoped, so I wound up canceling the subscription, but my daughter has just started to express an interest, so we may revisit it for her. But when I got into Khanmigo, I said, hey, what is 125X equals 25? What is that? And I was deliberately vague with that question, because it's super imprecise. And it could have come back and said, why, that's an algebraic equation, let's talk about what those are. Or it could start to break things down for me and say, well, this is an algebraic equation, here's how we would approach it, how do you think you should approach it in this case? It's super awesome as an illustration of how software, when it's in that mode, that let's-learn, not let's-do mode, can really help. So I'll list that one as my third favorite.
Nik (25:10) That dovetails well into my other question, which is that near the end of the book, you talk about systems that make users think, that really push users to think as part of their assistance. That's not everything in this book, right? There are lots of assistants that help and do stuff for a user. But there's this idea that you are building something to actively make the user think. This, however, is I think a very hard thing to design, and I think it's sometimes hard for users to accept, because as the designer, as the service provider, especially if it's not a choice, you're explicitly making this choice of, no, I'm gonna make you think about this. And you say in the book, users might not like it. I'm curious, maybe without giving away the whole chapter, what might you suggest to designers who have this sense of, I actually think you should think about this, or you should verify, or there's something more where I need you cognitively engaged? What advice might you give?
Christopher Noessel (26:06) In my last couple of months at IBM, I was leading the Design for AI Guild, a big international thing. And we were trying to focus a lot of attention on the bleeding edge, the cutting edge. But I was doing some due diligence, polling my product managers across the company. I was like, what are your designers' strengths and weaknesses as they apply to the product? And it turns out that the fundamentals were a bigger need. So I created this big workshop, a four- or five-day workshop, I could squeeze it if I needed to, that taught the fundamentals from a model and algorithm perspective so that designers would be equipped. Well, at the very end of it, I began to touch on some of these concepts, because I was working on the book at the time, concepts like cognitive forcing functions, with their horrible naming, and how people would use them.
And after the first time I presented this, I was at an IBM studio in Kochi, India. One of the students got up and came to me, and she said, Chris, I have to say this. I said, go ahead. She said, this cannot be designed. And I was like, what? She was like, you said that users hate it. And I was like, yeah, they generally hate cognitive forcing functions. And she said, that's not what I'm here to do. I'm here to give delight to my users. Well, if you're listening, please forgive me, but I'm going to call that the old model of thinking about what design does, because usability is no longer enough, not for efficacy with AI, not across organizations, and certainly not when we pull out to the big, big picture of the world. Just being usable has gotten us into some problem spaces. So part one, I think, is pull your camera back. It's not just about an individual's interaction with a moment. It's a collection of interactions and micro-interactions that add up to something. And we have to look at not just how pleased users are, but how efficacious, what's the good word? Not efficient. I think efficacious is probably it. It's a big academic word, but how efficacious are they in accomplishing what they need to accomplish with the software? And if you wind up finding, through research or the application of some of these patterns, that, hey, they're more efficacious even though they hate it a little more, you have a responsibility as a designer to say, okay, well, they're not in love with us, but they're doing what they need to do better. And that's a mark of maturity, I think, for designers. The advent of AI is going to force a reckoning for a lot of us in our stance about who we are in relation to our hypothetical and real users.
Nik (28:35) This is interesting, actually. Sorry, I did the thing where I say "this is interesting," because I say it all the time. I'm sorry.
Chris, one of the connections I'm making here, and I'm not sure if this is actually where you're going with it, is that one of the common ways we tune generative AI systems to create content that we like, or that is preferred, is through reinforcement learning from human feedback. This is where people basically get to see a side-by-side, hey, here's the answer to what you asked, or here's what we've generated, do you like A or do you like B better? And over time, the systems can move more towards what people prefer. However, this has been an issue in that it may lead to sycophancy, because people are like, well, I like the thing that's always nice to me and says I'm amazing. But the funny thing is that even people recognize, that's not what I want all the time. Sometimes when I'm asking about a serious thing, I need something to push back. And this idea that you have here of an outcome-driven method, where as opposed to someone rating and doing the reinforcement learning on, okay, is this something that
Christopher Noessel (29:19) Yeah.
Nik (29:39) the user likes, you instead look at the entire task the user is doing, especially in a human-AI workflow, and then rate the outcomes of that. I wonder if that's now something that designers and their teams are going to need to start thinking about if they want to be building technologies that really do help people. And thus they can test their hypothesis: if I make people think more, do I actually lead them to better outcomes or not? I mean, it's possible that maybe you don't, but from what I'm understanding here, at the end of the day it's: are you doing the task that you want to accomplish better?
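The side-by-side preference rating Nik describes can be sketched as pairwise vote aggregation, which is roughly the data a reward model is trained on in RLHF: raters pick the answer they prefer, and the system is then optimized so preferred answers score higher. The vote data and style names below are invented for illustration:

```python
from collections import defaultdict

# Toy sketch of pairwise preference aggregation behind RLHF-style tuning.
# Each tuple is one rater's side-by-side verdict: (winner, loser).
# The votes here are made up to illustrate the sycophancy drift.
votes = [
    ("flattering", "blunt"),
    ("flattering", "blunt"),
    ("blunt", "flattering"),
    ("flattering", "blunt"),
]

def win_rates(votes):
    """Fraction of comparisons each answer style wins."""
    wins, totals = defaultdict(int), defaultdict(int)
    for winner, loser in votes:
        wins[winner] += 1
        totals[winner] += 1
        totals[loser] += 1
    return {style: wins[style] / totals[style] for style in totals}

rates = win_rates(votes)
# The flattering style wins 3 of its 4 comparisons, so naive preference
# optimization drifts toward it. Rating task outcomes instead of
# per-answer likability is the correction Nik is proposing.
print(rates["flattering"])  # 0.75
```

The outcome-driven alternative would replace the vote tuples with end-of-task success measures, so the aggregate rewards answers that made the user's whole workflow go better, not just the ones that felt nicer in the moment.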
Dan (30:17) There's a metric problem with this, which is that it's sometimes very hard to determine whether you are being effective at something as opposed to being efficient at something. And I think industry, and hashtag capitalism, is all about
Christopher Noessel (30:28) Oh yeah.
Dan (30:33) being efficient, doing something quickly. And if my designer or my employee doesn't learn anything and is de-skilled, well, that's not my problem. That's their problem. Or the user gets de-skilled. That's even less my problem, because then it's exactly that: you are now completely dependent on my system to do this. So I think.
Christopher Noessel (30:49) That's to my benefit.
Dan (30:57) There's this weird issue where there are just competing value systems here. It's almost like the product manager who's like, well, I'm going to juice these metrics for this quarter's gain, not caring that it destroys the user experience or long-term value of the product. And we are doing the same, writ large, when it comes to: press the AI button, have it do everything for you. Who cares if, A, the output sucks, and B, you don't learn anything. It got there faster. It did the thing for you, done and done. So I think this notion of how do we throw in speed bumps and
Christopher Noessel (31:36) Yeah, great. Early sales report.
Dan (31:45) forcing people to really think about choices is, while I 100% agree that it is essential, going to face the same pushback you got from the woman in India, Chris.
Christopher Noessel (31:57) Oh, entirely. So first I wanna answer Nik's question, which is that question of, what are we measuring to know that we're successful? And I think, yes, you are correct. Efficacy and happiness are both important, but I think it's an and, not an or. If users really seriously hated it, even if they're doing a good job, they're gonna resent it. You can't have that.
If users are super pleased, even if they're not doing as great a job, well, the client, the person paying for it, is going to say, yeah, we're not really fond of this. So there is some magic point where both things are true. But as I looked into the field of cognitive forcing functions while writing this book, I found there's no magic bullet. There is no version of the cognitive forcing function that users love, and even if you try to explain what the cognitive forcing function is doing, people still say, hey, this is interrupting my work, it's interrupting my flow, and I don't like it. And even worse, if you identify the fast thinkers and slow thinkers from Kahneman, although I call them something different in the book, it turns out that the people who are most prone to over-reliance, the fast thinkers, are the people who hate cognitive forcing functions the most and will bypass them the most. It's a really dark set of circumstances, but I believe that's the world we're in. So in short, my advice for that question is that you have to track both. You have to find that magic point where both are as optimized as they can be, because they will affect each other. But the second thing, which is what Dan asked about, I'm deeply aware of and worried about. I'll go to one quick anecdote: when I gathered a team within the Design for AI Guild and we began to pursue these questions, our first take on it was, hey, how do we not de-skill users? And after we had done our initial tests and shown our results, some other people wearing a business hat at IBM said, yeah, but business leaders don't care. In fact, they want de-skilled workers. That gives much better leverage in labor relations. They can pay people less. They can fire people more easily, right? From a gross capital-C Capitalist point of view, that's what they want.
And for a long time, I'd say about three or four months, I was super dispirited. I was sad because I thought, crap, all this work. I found this problem, and I believe it's a problem we need to solve, especially writ large, given how AI is getting slapped onto everything. And yet business leaders won't hear it. It was then that a fellow inside the research division of IBM who had heard my talk came to me and told me about over-reliance, which is the second version of the problem, the one business leaders do pay attention to: as you get de-skilled, you become less able to catch the lies and hallucinations coming out of an AI, and therefore your dependence is correlated with worse performance. That is something every business leader should pay attention to, because if the problem only appears when the AI goes away, well, that's like 0.01% of the time. But if the problem appears while the AI is up and helping, that's 99.99% of the time, and that matters. I was so grateful to have gotten that conceptual pivot to reframe our findings for the audience. However, in the book, I don't shy away from either. If you're a human on the planet, you want to solve both. You want to solve the de-skilling problem for social reasons and individual reasons, and you need to solve the over-reliance problem so that your ideas can get approved and into the software. Dan (35:36) As a design pragmatist, I'm always asking, well, how can I have it both ways? How can I do a little bit of this, a little bit of that? And with agents, I'm always trying to figure out where those meaningful decision points or meaningful break points are to bring the human back into the loop. Those could be teachable moments. Those could be moments of decision making. Those could be moments of keeping my skills high. But even that much can be a real problem for people, because they say, well, I just want the thing to go do it.
I don't want you to keep asking me these questions. Christopher Noessel (36:13) Three things come to mind. The first is that we're in a transitional period, and I'm an old guy, so I'm aware that my kids are growing up as more AI-native than I did. I grew up AI-ignorant except in science fiction; R2-D2 was the AI I was thinking about. But not so with my kids. And I trust that the increasing literacy that will accompany this technology means their generation will be better equipped, because they'll grow up with this problem all the time, and it will just become part of how they engage with an AI: you know what, I've asked you to do that, and I realize I can't do it myself. I'm going to take a moment. I'm going to take control and say, I need to learn this thing. But before then, and for old people like me, from a pragmatist point of view: when you're initially designing one of these interventions, which is the thing I call the human-goes-first pattern, and it's an intervention because you add it into an existing workflow, you start with an educated first guess. When I worked with Maximo at IBM to design the human-goes-first pattern for their civic infrastructure product, we talked to a subject matter expert and said, well, we don't want this all the time. If all we ever gave them was a Khanmigo, boy howdy, that would not be worth it to our customers. So we know it only needs to occur occasionally: keep me working, keep me working, but occasionally help me sharpen my saw. And that "occasionally" was a big conversation to have. Ultimately, we landed with Maximo on starting with a quarterly cadence. Quarterly, it would interrupt their work for a particular skill and say, okay, I know how you're normally used to doing this, but this time we're going to do it this way, and here's why.
It also included a snooze button, by the way, which I talk about a little in the book, because of course, if users are under the gun and the software doesn't understand that, they need to be able to say, not now, yes later. It makes sense to me that two more things ought to be possible. The first is dynamic adjustment of that cadence. If a particular user on a particular team shows that they are struggling with a particular skill, it would be great if the software could prompt them more frequently, so it's personalized. And if you've got an elder who is a master of that skill, you don't need to ping them as often; it just becomes more annoying to them. If that cadence could be set dynamically, that's awesome. We didn't get to how to do it, so I have no advice for that in the book. The last thing, because of that literacy point, is that I think designers ought to enable it to occur at any time. So if a user is in the system and thinks, wait a minute, I have a few extra minutes now, I'm not under the gun, let's do that now, let me go first, the software should allow it. Those, I think, I hope, would be the tools that get us that occasionalness, that magic cross point between efficacy and appropriate reliance, which is the antonym of over-reliance. Dan (39:11) "Human go first" sounds like either 60s robot speak or ancient caveman speak. Neanderthal speak, because, yeah, they're talking to a Homo sapiens. Christopher Noessel (39:13) Man. Right. Them de-skilling you. Nik (39:24) Chris, we talked earlier on the show about Anthropic's computer use, and many of the big players are shipping computer use. I think that's something people are excited about, but it also carries a bunch of risks, and it's how people are building agents. I'm curious if you've got any thoughts for our listeners: if they're trying out computer use, maybe for their own design work, how might you make a computer use agent more assistive, so that you're using it in a way that is responsible and less likely to get you in trouble, while still supporting you? Christopher Noessel (39:57) In full disclosure, I have not given Claude permission to do that on my machine yet. And part of it is that I'm aware of de-skilling in my real life, and I don't want to commit that error. I would much rather have it say, go here, go there, go there, and make me follow along so I can internalize it. But I also understand people may not want to do that. If I were to advise users on how to approach that question, I would say: hey, I've presented this five-part model of how AI can assist users. Consider each part in turn. How are you using it to help you perceive, to literally detect with your sensory organs what you need to detect? How does it help you know or not know? And that could be very shallow, like what's the name of this flower, or it could be very deep, like how does oxygen turn bridges into a risk? Those are deeper concepts. It could be planning. I think not enough work has been done on how AI helps draft scenario possibilities and then select which one to recommend, but boy howdy, it can be done, and it's a really fascinating field. It's helping me perform: I don't want you to do it, I want you to guide me. Let me know when I'm off the rails. And lastly, reflection, which I think is the least utilized of these universal assists. I can only come up with two examples in the book, but I think they're very powerful. Hey, help me think about both the goals I have told you and the tactics I have used to get there. Do I need to change those tactics? Do I need to rethink those goals? Is this something that is good for me, my community, my people, and my world?
Those are all things that I think an assistant can help you do, and should help you do. Nik (41:41) All right, Chris, thank you so much for being on the show. Folks, if you want to pick up a copy of Designing Assistant Technology, you can find it online at Rosenfeld Media, and if you use the code CMU25, which Chris has provided, you'll get a 20% discount directly from Rosenfeld. Chris, thanks for being on the show, and we wish you the best as you head out into future things. Christopher Noessel (42:03) Thanks so much. Good to talk to you guys. Dan (42:04) And thank you everyone for tuning in to this week's episode of AI and Design. We'll see you next week.