Social Media

Sometimes I post thoughts on LinkedIn when I think a blog post might be too much.

Follow me on LinkedIn

Jared Peterson

Science, Strategy, and Training

Walter Mischel (of Marshmallow Experiment fame) developed a controversial theory of personality (CAPS) that has intrigued me for years. For Habit Weekly's new monthly journal club over on the Slack, I decided to dig in.

Consider two claims:

Jared is agreeable.
Arizona is sunny.

Both of these are averages, traits, or tendencies. But they are not models, and by nature of not being models they cannot forecast variability. You cannot forecast rain with a description as reductionist as “Arizona is sunny.”

Yet we can agree on some situations (contexts) where there will be variability. Arizona will not be sunny when a storm is passing through. Jared will not be agreeable if you disagree with him (just kidding). There are predictable deviations from the tendency. Put another way: there is predictable variability between situations, and predictable consistency within situations.

This is exactly what Mischel’s work showed. For example, a child who is aggressive when warned by an adult might be below average on aggression when approached by a peer, and these situation-specific behaviors are consistent over time. This context sensitivity IS personality, according to Mischel, and trait-based approaches are fundamentally unable to capture it. Kahneman has called this a “scandal” because it showed the insufficiency of traits, which must treat this predictable deviation as error. But such variability is not an error; it's a predictable part of personality. (I'm sure Kahneman saw a comparison to his own work.)

Rather than identifying all possible contexts that affect personality (impossible), Mischel’s Cognitive-Affective Personality System (CAPS) instead focuses on how a person interprets (or construes) a situation. He and his co-author, Shoda, argue this "construal" transforms unfamiliar situations into familiar cues, which then activate habits, and habits are then what really drive personality - both its average and its variation.

It is a concise and plausible theory. I'm a fan.
But there is an old saying, “all models are wrong, but some are useful,” and CAPS has a tragic flaw that is hard to get around: it is more right, but less useful. Modeling the process by which someone interprets and construes a situation is far more difficult than measuring average behavior with a survey, and so CAPS doesn't make behavior easier to predict.

The comparison to behavioral science is hard to ignore. Humans tend to be loss averse, tend to be influenced by social norms, and tend to be impacted by choice architecture. But tendencies like these are fundamentally insufficient for predicting when a finding will generalize. Without a theory of how people interpret situations, our ability to predict generalizability is permanently limited. It is like trying to forecast the weather when all you know is "Arizona is sunny."

I plan on writing more about this in the future, but in the meantime check out the references in the comments.

#Personality #BehavioralScience #ChoiceArchitecture #Context #Psychology
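The "predictable variability between situations, predictable consistency within situations" point can be made concrete with a toy calculation. The child, the situations, and every number below are invented for illustration; the point is only that conditioning predictions on the situation shrinks error in a way a single trait score cannot:

```python
import statistics

# Invented observations of one child's aggression (0-10 scale),
# recorded repeatedly in two situations.
observations = {
    "warned_by_adult":    [8, 7, 9, 8],   # consistently high
    "approached_by_peer": [2, 1, 2, 3],   # consistently low
}

all_scores = [x for xs in observations.values() for x in xs]
trait_score = statistics.mean(all_scores)   # the trait view: one number (5.0)

for situation, xs in observations.items():
    situational_mean = statistics.mean(xs)
    # The trait model predicts the same average everywhere; a CAPS-style
    # model conditions on the situation. Compare mean absolute error.
    trait_error = statistics.mean(abs(x - trait_score) for x in xs)
    caps_error = statistics.mean(abs(x - situational_mean) for x in xs)
    print(f"{situation}: trait error {trait_error:.2f}, "
          f"situational error {caps_error:.2f}")
```

The trait score (5.0 here) describes neither situation well; the "if situation, then behavior" profile describes both.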


Name a book that:
1. Has helped you to be a better behavioral scientist
2. Isn't on the radar of most behavioral scientists

My pick is "Games: Agency as Art" by Thi Nguyen.

At first glance, a book on the philosophy of games doesn't seem related to behavioral science, but game designers are at the pinnacle of the craft of designing for behavior. There is more to games than points and badges.

Nguyen's core argument is that just as paintings are the art of seeing and songs are the art of hearing, games are the art of agency. That's vague, so let me make it concrete. Country songs crystallize the hominess of a small town, but Animal Crossing crystallizes the act of beautifying that town. Romance books crystallize the experience of love, but playing 'house' crystallizes the experience of adulthood for children all over the globe. Art crystallizes the experience of beauty, but chess crystallizes a mode of being: careful deliberation of moves and counter-moves in the hopes of having a brilliant insight that brings everything to a conclusion.

In games we take a step outside our normal way of thinking and acting, and out of our normal personality, in order to experience a different way of being (or agency). And also, perhaps, to learn the *joy* and *beauty* of those alternative ways of being.

Nguyen says that when he started philosophy, he wasn't great at it because he didn't have the patience. It seemed tedious to think through all the nuances. We are all somewhere on the ADHD spectrum, and it perhaps seemed to him that he was too high on that spectrum to do the deep sort of engagement that philosophy required. But then he discovered chess. Chess taught him how to enjoy a mode of thought he had previously found tedious: the careful deliberation of moves and counter-moves. Chess unlocked a new 'agency' for him that had previously been out of his grasp, a new facet of his personality he didn't know he was capable of.
He learned to enjoy what he had previously found tedious.

I read this book while working on an e-commerce project where we had identified Choice Overload as a problem. But as I read Nguyen, I started to ask myself how video game designers get away with the exhausting search for collectibles, or the tedium of grinding for points. And it occurred to me that the problem wasn't the number of options on the site, but the mode of being (agency) we were forcing users into. If we ask users to engage in a methodical comparison of options, of course users will experience Choice Overload. But if we could find a way to crystallize the experience of exploration, then what had previously been tedious could instead be exciting. This insight eventually led to the article I wrote with Roos van Duijnhoven, MSc on Choice Overload.

Nguyen's book is a treasure trove of insights for behavioral design, personality change, gamification, and the dangers of point systems. It's a book I wish would become part of the canon for behavioral scientists everywhere.


Tic-tac-toe is a solved game. That means that, if you know the right techniques, it is mathematically impossible to lose. This is opposed to chess, where we are unsure if there is a single optimal strategy. (Edit: or as opposed to poker, where there cannot be a single optimal strategy because what is rational depends on a changing situation.)

The question I wish to ask is this: is life a solved game? Is there a single reasoning strategy that, if you can apply it correctly, will always bring you to the optimal and correct conclusion, such that it is impossible to lose?

I ask because I sometimes fear we behavioral scientists act like life is solved. In order to conclude that someone is exhibiting a bias, and is therefore irrational, two things must be true:

1. The statistical or economic model you are comparing the person to must actually give the correct answer if applied correctly (i.e., rationality is solved).
2. You as the researcher must know how to apply this strategy correctly.

The conclusion of these two premises is that you yourself are perfectly rational. You have solved life. Congratulations! Though you must forgive me for doubting.

Yes, in some lab settings (which are often tic-tac-toe-like in their simplicity) we can know the correct answer and so know someone is being irrational. But the real world is messy, and it is unclear that economic and statistical reasoning is actually optimal there. In fact, it's not hard to find settings where these models will lead you astray, such as non-ergodic settings, or domains with unknown unknowns.

This is why I'm hesitant to talk about biases outside of very controlled lab settings. I am not confident in the models, nor in our ability to apply them correctly, and so I am constantly unsure whether the bias is actually there, and even if it is, whether that makes it irrational. We can show that a heuristic is present, but we need to be more cautious about declaring that heuristic to be a bias. Sometimes heuristics give the same answer as a rational model, and sometimes heuristics are more optimal than a rational model.

Conclusion: we have not solved life, and so we should be more humble in our declarations of irrationality. Not because we can never be more rational, but because there should always be uncertainty about whether we understand enough to declare ourselves more rational. Especially when the subjects of our study are in the context and we merely observe it.
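"Solved" is concrete enough to verify on a laptop. Here is a minimal minimax sketch (Python, my choice for illustration; not from the original post) confirming the well-known result that perfect play by both sides in tic-tac-toe always ends in a draw:

```python
from functools import lru_cache

# Board is a 9-character string, cells 0-8, ' ' for empty.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Outcome for X with both sides playing perfectly:
    +1 = X wins, 0 = draw, -1 = O wins."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    if ' ' not in board:
        return 0
    nxt = 'O' if player == 'X' else 'X'
    scores = [minimax(board[:i] + player + board[i + 1:], nxt)
              for i, cell in enumerate(board) if cell == ' ']
    return max(scores) if player == 'X' else min(scores)

print(minimax(' ' * 9, 'X'))  # 0: perfect play from the empty board is a draw
```

Chess admits no such thirty-line verdict: its game tree is far too large to exhaust, which is exactly the asymmetry between the two games.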


I wonder if it's better to think of Kahneman and Tversky's (K&T) work as a reaction to behaviorism than as a reaction to economics.

Yes, the Cognitive Revolution happened before they did their most famous work, but psychology was still recovering from Behaviorism's hold. Behaviorism said that minds were black boxes and we shouldn't even try to understand what was going on inside them because it was impossible. K&T blew that belief out of the water.

When these two compared human decision-making to models of "rational" decision-making, they found humans systematically came to different answers. And it dawned on them: if they could find a rule that led to those deviations, then that rule was likely the same one that humans were using to reason. It was an impressive insight. They had found a way to infer what was happening inside the black box that is the mind. Not out of an interest in proving that humans were stupid, but as a methodology for studying cognition. (The idea of studying deviations from models was developed by others first, but they were the first to apply it in this particular way.)

Thus the Heuristics and Biases paradigm was born. The approach was to identify systematic "biases" (deviations) from economic, statistical, and similar models, and then hypothesize "heuristics" (rules) which could explain why humans came to a different answer than those models.

As a methodology for uncovering principles of human cognition, I fully endorse the Heuristics and Biases paradigm. It's a great approach. I think the heuristics that have been identified are in large part correct descriptions of how humans actually reason. (Or at least how they reason in lab settings with stimuli they are unfamiliar with.) It's only once we start drawing conclusions about rationality, or overgeneralizing the findings to non-lab settings, that I get uneasy.


Humans are not hopeless, helpless, or totally irrational. By understanding human potential, the wicked problems of human behavior can be transformed into actionable strategies.

🔗 This is the central claim on my newly launched website: www.behaviorchange.expert

My goal with this site is to clarify what I do, and to bridge connections not just with me, but also with two remarkable (though very different) organizations I’m associated with:

🌊 Nuance Behavior - An initiative I co-founded that's focused on digital behavior change. We tackle complex challenges by illuminating the underlying psychology, and developing strategies and frameworks for changing behavior across the entire user journey.

🧊 ShadowBox LLC - Where we research and train decision-making as well as intuitive and perceptual expertise. At ShadowBox, we focus on high-stakes domains characterized by uncertainty, time pressure, and complexity, such as medicine, law enforcement, and child welfare.

These may seem very different, but I continually find there is much overlap. The principles of expert decision-making and motivation are complementary, and they can help clarify the goals and strategies, and produce meaningful tactics, for any project that involves humans (i.e., all projects).

📢 If you are interested in research, strategy, or training, either within your company or in some target population, please schedule a call! We can help you uncover the underlying psychology, improve decision-making, and shift behavior in your project.


Every academic eventually reaches the Taco Bell stage of their career. Just as everything on the menu at Taco Bell consists of the same ten ingredients mixed and matched in different ways, academics reach a point where they use the same ten anecdotes and talking points in every presentation they give.

Why? Because after decades of studying these principles in depth, they know the points inside and out and can wield them to answer any question or challenge thrown at them. Every question is an order for which they have the ingredients. I don’t care what question you ask Richard Thaler; I guarantee you he can answer it by talking about cashews, urinal flies, and organ donation.

Knowledge of that depth doesn’t come from reading every book, listening to every podcast, and remembering every bias. It comes from engaging with a few ideas deeply. Outside of an academic context, this sort of depth is hard to get. So my recommendation to those outside academia is this: find a group of peers and debate topics in your field.

Since 2016 I have been on an online forum with friends where we talk and debate politics, philosophy, science, entertainment, family, and everything else under the sun. Having a group of intelligent friends with whom I can talk about these topics has been one of the most important intellectual investments I have made outside of college. They force me to think and write, they build on my thoughts, introduce me to new ideas and thinkers, ask novel questions I never thought of, and encourage me to keep progressing. It is certainly more motivating than online school, and I would argue just about as useful.

If you ask me to write a paper about a topic, one of my first moves will likely be to go to the forum and search for the times we have talked about it. There my thoughts are written down in detail, with links to other threads where we went over similar topics. Often there will even be counterarguments, and counter-counterarguments. It is as valuable as any note-taking software, but with the added benefit of people who think differently. And because of this, there are hundreds of topics for which I have my ten ingredients, hundreds of topics for which I have reached the Taco Bell stage.

There are of course a thousand other things you should be doing to keep up with your field. But I would argue that finding a good group of peers belongs near the top of your list. It brings depth, breadth, and motivation. It can be as powerful as a graduate program in driving your learning, if for no other reason than that a good group of friends is incredibly motivating and keeps you on your toes. You will become a veritable Taco Bell of insights as you debate topics and introduce each other to new ideas. And if you keep it up long enough, maybe one day you’ll even be a Cheesecake Factory.

If you do decide to pursue this idea, let me know. I may be interested in joining your group. :)


I have a somewhat controversial take: Behavioral Economics has inadvertently contributed more to reviving and legitimizing the concept of Homo Economicus than any field outside of traditional economics.

If it hadn't been for the emergence of Behavioral Economics, psychologists might have largely ignored economists' 'as-if' models of human behavior. Yet, ironically, within Behavioral Economics the prevailing model of human behavior could be called "Homo Economicus Plus." The existence of biases legitimizes the model by allowing researchers to tack biases onto the end of economic models as if they were error terms. This in turn transforms these hypothetical models into seemingly descriptive ones, now ostensibly validated by psychological research.

Behavioral Economics seems to me a significant obstacle preventing economists from taking cognitive science more seriously. Biases are like epicycles, enabling economists to cling to as-if models of human thinking, but now under a veneer of psychological legitimacy.

By saying this, I don't mean that Behavioral Economics doesn't have good psychology. What I mean is that humans are far more complex and intriguing than merely "Homo Economicus Plus Biases." We develop intuitive theories, weave narratives, and make judgments rooted in our perceptions of right and wrong. A robust approach to Behavioral Economics should integrate these rich dimensions of human experience. Yet these aspects are often absent from discussions of Behavioral Economics in favor of biases. We can't just add error terms to the end of rational models and call it good psychology.

If, like me, you are interested in what Behavioral Economics could look like if it moved beyond biases, I recommend two papers by Samuel Johnson:

1. Toward a Cognitive Science of Markets: Economic Agents as Sense-Makers
2. Conviction Narrative Theory: A Theory of Choice Under Radical Uncertainty

These papers represent, for me, what the future of Behavioral Economics could be if it took cognitive science more seriously.
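What "tacking a bias onto the end of an economic model" looks like can be sketched in a few lines. The lambda default below is Tversky and Kahneman's oft-quoted loss-aversion estimate; everything else (the function names, the gamble) is invented for illustration:

```python
# Toy "Homo Economicus Plus": the rational model (expected value) with a
# single loss-aversion parameter bolted on, error-term style.

def expected_value(gamble):
    """gamble: list of (probability, payoff) pairs."""
    return sum(p * x for p, x in gamble)

def ev_plus_loss_aversion(gamble, lam=2.25):
    """Same machinery, but losses are weighted by lam (~2.25 is Tversky &
    Kahneman's classic estimate) -- the 'bias' tacked onto the end."""
    return sum(p * (x if x >= 0 else lam * x) for p, x in gamble)

coin_flip = [(0.5, 100), (0.5, -100)]
print(expected_value(coin_flip))         # 0.0   -> Homo Economicus is indifferent
print(ev_plus_loss_aversion(coin_flip))  # -62.5 -> "Plus Biases" declines the bet
```

The patch changes the prediction, but the underlying machinery (probabilities times payoffs) is still Homo Economicus, which is exactly the complaint.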


Is 'Nudge' ethical? I have two answers to this question.

The first is that I don't think it is a meaningful question, because the word Nudge is not really all that meaningful. Any change to the context around a decision which might change behavior (e.g., clarifying options, forcing a choice, using bold font, adding reviews) could be considered a 'Nudge.' So to declare all Nudges unethical would be tantamount to saying that all presented choices are unethical.

My second answer is this: the US Supreme Court has determined that it is UNETHICAL for an organization to use freedom of choice as an excuse not to consider the well-being of its employees, and so nudges must be ethical at least some of the time. In Hughes v. Northwestern University, Northwestern had offered 400+ retirement and savings plans, many of which were terrible and had unreasonably high fees. Northwestern argued that employees were free to choose whatever they wanted, and that it had no responsibility to help or influence the decisions of its employees. In a unanimous decision the Supreme Court disagreed, arguing that Northwestern had fiduciary obligations to its employees and could not leave the entire burden of choice on them.

If someone has an ethical criticism of a SPECIFIC nudge, I am all ears. I think behavioral scientists should always be questioning what is going on in the field. But when people criticize Nudge as a concept, I find it hard to take the criticism seriously.


Nudges have been dismissed as unethical and ineffective. So here are some suggestions on how to save us from Nudge Theory.

Put the carrots in a safety deposit box

In one study, cafeteria goers were nudged to put carrots on their plate, which they did. But they did not eat them. What a waste! The solution: let’s put the carrots in a safety deposit box that can only be opened with a security code provided by the cafeteria staff. Workers are only permitted to provide the code if someone explicitly asks for carrots. After all, if we are not 100% sure that someone is going to eat the carrots on their plate, then we should probably prevent them from putting the carrots on their plates in the first place. Otherwise that’s just food waste!

Use the tried and true tactic of writing notices in small-font legalese

Governments are good at one thing, and one thing only: writing documents that no one understands. So why are we behavioral scientists trying to prevent them from doing what they do best?

Avoid changing too many things

When running a “Nudge” experiment, you can only change one small thing, or else it will be unclear what caused the change. As a result, behavioral research often tests one teeny tiny intervention, like changing the subject line of an email. These small changes, unsurprisingly, have small effects. As a practitioner I usually don’t limit myself to making one small change, but perhaps the critics have a point. If one small change only has a tiny impact, why should I expect many changes to have a larger impact? If one change in a controlled experiment has a small impact, then we should logically assume lots of small changes will not have a larger impact. That’s just common sense.

If notices don’t work, let’s use fines and jail

‘Nuff said.

Provide no defaults

We might harm people if they sign up for the wrong number of times a week to go to the gym. So let’s give them the freedom to choose every possible option. I never understood combinatorics very well, but I’m sure gym goers will appreciate choosing among every possible combination of the 365 days of the year.

Make voting more difficult

In one study, people who were “nudged” into choosing a specific plant took less good care of the plant, and the plant died sooner. Well, shouldn’t we take our democracy at least as seriously as a plant? I propose a plan to make voting more difficult. That way the only people who vote are the people who really, truly care: the fanatics.

---

This post is partly inspired by responses to a recent WSJ article. The article itself makes a good point that every behavioral scientist appreciates: single nudges don’t create long-term behavior change. But some have reacted as if the article meant nudges were useless. Nudge isn’t some special type of manipulation; it is just strategy and design informed by psychology (which is why the term is misleading, in my opinion).
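For what it's worth, the sarcasm about small changes has real arithmetic behind it: independent small lifts compound multiplicatively rather than staying tiny. A back-of-the-envelope sketch (every number below is invented for illustration):

```python
# Ten independent changes, each lifting conversion by a "tiny" 3%.
baseline = 0.10          # assumed 10% baseline conversion rate
lift_per_change = 1.03
combined = baseline * lift_per_change ** 10
print(round(combined, 4))   # 0.1344 -> ten tiny changes, ~34% relative lift
```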


Last week, I defended the ethics of Nudge. Today, I feel it's my duty to highlight some things I consider unethical.

Offender number 1: Microsoft

Microsoft's persistent and overbearing push to get users onto OneDrive is over the top. Personally, I've only ever used OneDrive by accident. Occasionally I lose files for weeks because Microsoft automatically saves them to OneDrive without my realizing it.

Offender number 2: Duolingo

Duolingo showcases the difference between "little e" and "Big E" behavior change ('e/E' stands for engagement). While Duolingo is effective at keeping users engaged for hours, it falls short in helping users achieve fluency in a language. A particularly emblematic feature is the ability to buy back a lost streak. Duolingo is often better at getting users to care about streaks than about the language they are learning, and it makes sure to capitalize on that.

Offender number 3: Default organ donation

Even Thaler and Sunstein, who popularized this famous case study, acknowledged that defaults in organ donation might be unethical. Follow-up studies show that defaults don't work well in this context anyway, due to social and policy factors at hospitals. If it's unethical and ineffective, why does it remain a prominent example in the field? (Or perhaps it’s ethical because it doesn’t influence people anyway? 🤔)

Takeaway: Nudge is a meaningless term because you can't not nudge, and so the ethics of behavior change should be a constant consideration in product and marketing design, rather than something to consider only on special occasions. Organizations must choose defaults, advertise, make money, and present choices, all of which involve ethical considerations, regardless of whether these choices are made with the concept of "Nudge" in mind. We can't excuse our design choices through "revealed preference" or by claiming we're just designing for aesthetics (as if aesthetics doesn't 'nudge'). Every design choice has ethical implications, and so we must address these implications as a regular part of the workflow rather than treating the ethics of behavior change as a special case for "Nudges."

Another consideration is that we often sell ourselves on the value of the thing we are trying to accomplish, which can blind us to the ethics of what we are doing. This is where I think behavioral scientists are most likely to get into hot water. It is always important to take into account how our ethics and values differ from others'. You will never please everyone, but if you find yourself recommending design choices that would draw scorn from someone, it is worth thinking through whether that scorn might have merit.

#Ethics #BehaviorChange #Nudge #ProductDesign #Marketing #Microsoft #Duolingo #OrganDonation #DesignChoices

Jared Peterson

Science, Strategy, and Training

Nudges have been dismissed as unethical and ineffective. So here’s some suggestions on how to save us from Nudge Theory. Put the carrots in a safety deposit back In one study, cafeteria goers were nudged to put carrots on their plate, which they did. But they did not eat them. What a waste! The solution: Let’s put the carrots in a safety deposit box that can only be opened with a security code provided by the cafeteria staff. Workers are only permitted to provide the code if someone explicitly asks for carrots. After all, if we are not 100% sure that someone is going to eat the carrots on their plate, then we should probably prevent them from putting the carrots on their plates in the first place. Otherwise that’s just food waste! Use the tried and true tactics of writing notices in small font legalese Governments are good at one thing, and one thing only - writing documents that no one understands. So why are we Behavioral Scientists trying to prevent them from doing what they are best at? Avoid changing too many things When running a “Nudge” experiment, you can only change one small thing, or else it will be unclear what caused the change. As a result, behavioral research often test one teeny tiny little intervention, like changing the subject line on an email. These small changes, unsurprisingly, have small effects. As a practitioner I usually don’t limit myself to making one small change, but perhaps the critics have a point. If one small change only has a tiny impact, why should I expect many changes to have a larger impact? If one change in a controlled experiment has a small impact, then we should only logically assume lots of small changes will not have a larger impact. That’s just common sense. If notices don’t work, let’s use fines and jail ‘Nuff said. Provide no defaults We might harm people if they sign up for the wrong number of times a week to go to the gym. 
So let’s give them the freedom to choose every possible option: I never understood combinatorics very well, but I’m sure gym goers will appreciate choosing every possible combination of the 365 days of the year. Make voting more difficult In one study, people who were “nudged” into choosing a specific plant took less good care of the plant, and the plant died sooner. Well, shouldn’t we take our democracy at least as seriously as a plant? I propose a plan to make voting more difficult. That way the only people who vote are the people who really truly care - the fanatics. --- This post is partly inspired by responses to a recent WSJ article. The article itself makes a good point that every Behavioral Scientist appreciates - single nudges don’t create long-term behavior change. But some have reacted as if the article meant nudges were useless. Nudge isn’t some special type of manipulation, it is just strategy and design informed by psychology (which is why the term is misleading in my opinion).

Jared Peterson

Science, Strategy, and Training

I have a somewhat controversial take that Behavioral Economics has inadvertently contributed more to reviving and legitimizing the concept of Homo Economicus than any field outside of traditional economics. If it hadn't been for the emergence of Behavioral Economics, psychologists might have largely ignored economists' 'as-if' models of human behavior. Yet, ironically, within Behavioral Economics the prevailing model of human behavior could be called "Homo Economicus Plus." The existence of biases legitimizes the model by allowing researchers to add biases to the end of economic models as if they were error terms. This in turn transforms these hypothetical models into seemingly descriptive ones, now ostensibly validated by psychological research. Behavioral Economics seems to me a significant obstacle preventing economists from taking cognitive science more seriously. Biases are like epicycles, enabling economists to cling to as-if models of human thinking, but now under a veneer of psychological legitimacy. By saying this I don't mean to say Behavioral Economics doesn't have good psychology. But rather what I mean to say is that Humans are far more complex and intriguing than merely "Homo Economicus Plus Biases." We develop intuitive theories, weave narratives, and make judgments rooted in our perceptions of right and wrong. A robust approach to Behavioral Economics should integrate these rich dimensions of human experience. Yet, these aspects are often absent in discussions about Behavioral Economics in favor of biases. But we can't just add error terms to the end of rational models and call it good psychology. If like me you are interested in what Behavioral Economics could look like if it moved beyond biases, I will recommend two papers by Samuel Johnson. 
Toward a Cognitive Science of Markets: Economic Agents as Sense-Makers Conviction Narrative Theory: A Theory of Choice Under Radical Uncertainty These papers represent, for me, what the future of Behavioral Economics could be if it took cognitive science more seriously.

Jared Peterson

Science, Strategy, and Training

Walter Mischel’s (of Marshmallow Experiment fame) controversial theory of personality (CAPS) has intrigued me for years. For Habit Weekly's new monthly journal club over on the slack I decided to dig in. Consider two claims: Jared is Agreeable Arizona is sunny Both of these are averages, traits, or tendencies. But they are not models. And by nature of not being models cannot forecast variability. i.e., You cannot forecast rain with a description as reductionist as “Arizona is sunny.” Yet, we can agree on some situations (context) where there will be variability. Arizona will not be sunny when a storm is passing through. Jared will not be agreeable if you disagree with him (just kidding). There are predictable deviations from the tendency. Or another way of saying it is that there is predictable variability between situations, and predictable consistency within situations. This is exactly what Mishel’s worked showed. For example, a child that is aggressive when warned might be below average on aggression when approached by a peer, and these situation specific behaviors are consistent over time. This context sensitivity IS personality according to Mischel, and trait-based approaches are fundamentally unable to capture it. Kahneman has called this a “scandal” because it showed the insufficiency of traits which must treat this predictable deviation as an error. But such variability is not an error, its a predictable part of personality. (I'm sure Kahneman saw a comparison to his own work) Rather than identifying all possible contexts that effect personality (impossible), Mischel’s Cognitive Affective System Theory of Personality (CAPS) instead focuses on how a person interprets (or construes) a situation. He and his co-author, Shoda, argue this "construal" transforms unfamiliar situations into familiar cues which then activate habits, and habits are then what really drive personality - both its average and its variation. It is a concise and plausible theory. I'm a fan. 
But there is an old saying, “all models are wrong, but some are useful,” and CAPS has a tragic flaw that is hard to get around: It is more right, but less useful. Modeling the process of how someone interprets and construes a situation is far more difficult than understanding average behavior with a survey, and so CAPS doesn't make behavior easier to predict. The comparison to Behavioral Science is hard to ignore. Humans tend to be loss averse, tend to be influenced by social norms, and tend to be impacted by choice architecture. But traits are fundamentally insufficient for predicting generalizability. Without a theory of how people interpret situations, our ability to predict generalizability is permanently shot. It is like trying to forecast when all you know is "Arizona is sunny." I plan on writing more about this in the future, but in the meantime check out references in the comments. #Personality #BehavioralScience #ChoiceArchitecture #Context #Psychology

Jared Peterson

Science, Strategy, and Training

Name something that:
1. Has helped you to be a better behavioral scientist
2. Isn't on the radar of most behavioral scientists

My pick is "Games: Agency as Art" by Thi Nguyen.

At first glance, a book on the philosophy of games doesn't seem related to behavioral science, but game designers are at the pinnacle of their craft in designing for behavior. There is more to games than points and badges.

Nguyen's core argument is that just as [paintings are the art of seeing] and [songs are the art of hearing], [games are the art of agency]. That's vague, so let me make it concrete. Country songs crystallize the hominess of a small town, but Animal Crossing crystallizes the act of beautifying that town. Romance books crystallize the experience of love, but playing 'house' crystallizes the experience of adulthood for children all over the globe. Art crystallizes the experience of beauty, but chess crystallizes a mode of being: careful deliberation of moves and counter-moves in the hope of a brilliant insight that brings everything to a conclusion.

In games we step outside of our normal way of thinking and acting, and out of our normal personality, in order to experience a different way of being (or agency). And also, perhaps, to learn the *joy* and *beauty* of those alternative ways of being.

Nguyen says that when he started philosophy, he wasn't great at it because he didn't have the patience. It seemed tedious to think through all the nuances. We are all somewhere on the ADHD spectrum, and perhaps it seemed to him that he was too high on that spectrum for the deep sort of engagement philosophy requires. But then he discovered chess. Chess taught him how to enjoy a mode of thought he had previously found tedious - the careful deliberation of moves and counter-moves. Chess unlocked a new 'agency' for him that had previously been out of his grasp. A new facet of his personality he didn't know he was capable of.
He learned to enjoy what he had previously found tedious.

I read this book while working on an e-commerce project where we had identified Choice Overload as a problem. But as I read Nguyen, I started to ask myself how video game designers get away with the exhausting search for collectibles, or the tedium of grinding for points. And it occurred to me that the problem wasn't the number of options on the site, but the mode of being (agency) we were forcing users into. If we ask users to engage in a methodical comparison of options, of course they will experience Choice Overload. But if we could find a way to crystallize the experience of exploration, then what had previously been tedious could instead be exciting. This insight eventually led to the article I wrote with Roos van Duijnhoven, MSc on Choice Overload.

Nguyen's book is a treasure trove of insights for behavioral design, personality change, gamification, and the dangers of point systems. It's a book I wish would become part of the canon for behavioral scientists everywhere.

Jared Peterson

Science, Strategy, and Training

Tic-Tac-Toe is a solved game. That means that, if you know the right techniques, it is mathematically impossible to lose. This is as opposed to Chess, where we are unsure whether there is a single optimal strategy. (Edit: or as opposed to Poker, where there cannot be a single optimal strategy, because what is rational depends on a changing situation.)

The question I wish to ask is this: Is life a solved game? Is there a single reasoning strategy that, if you can apply it correctly, will always bring you to the optimal and correct conclusion, such that it is impossible to lose?

I ask because I sometimes fear we Behavioral Scientists act like life is solved. In order to conclude that someone is exhibiting a bias, and is therefore irrational, two things must be true:

1. The statistical or economic model you are comparing the person to must actually give the correct answer if applied correctly (i.e., rationality is solved)
2. You as the researcher must know how to apply this strategy correctly

The conclusion of these two premises is that you yourself are perfectly rational. You have solved life. Congratulations! Though you must forgive me for doubting.

Yes, in some lab settings (which are often Tic-Tac-Toe-like in their simplicity) we can know the correct answer, and so know someone is being irrational. But the real world is messy, and it is unclear that economic and statistical reasoning is actually optimal there. In fact, it's not hard to find settings where these models will lead you astray, such as non-ergodic settings, or domains with unknown unknowns.

This is why I'm hesitant to talk about biases outside of very controlled lab settings. I am not confident in the models, nor in our ability to apply them correctly, and so I am constantly unsure whether the bias is actually there - and even if it is, whether that makes it irrational. We can show that a heuristic is present, but we need to be more cautious about declaring that heuristic a bias.
Sometimes heuristics give the same answer as a rational model, and sometimes heuristics outperform a rational model.

Conclusion: We have not solved life, and so we should be more humble in our declarations of irrationality. Not because we can never be more rational, but because there should always be uncertainty about whether we understand enough to declare ourselves more rational. Especially when the subjects of our study are in the context and we merely observe it.
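The "solved game" claim can actually be checked by brute force. Here is a minimal negamax sketch (plain Python, no libraries) that exhaustively searches Tic-Tac-Toe and confirms that with best play from the empty board, neither side can be forced to lose:

```python
from functools import lru_cache

# Every row, column, and diagonal on a 3x3 board (cells indexed 0-8).
LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Value of the position for the player to move: +1 win, 0 draw, -1 loss."""
    if winner(board) is not None:
        return -1          # the previous move just won; player to move has lost
    if "." not in board:
        return 0           # full board, no winner: draw
    best = -1
    for i, cell in enumerate(board):
        if cell == ".":
            nxt = board[:i] + player + board[i + 1:]
            # the opponent's best outcome is our worst, hence the negation
            best = max(best, -minimax(nxt, "O" if player == "X" else "X"))
    return best

print(minimax(".........", "X"))  # 0: perfect play from the empty board is a draw
```

The result is 0 - a draw - which is what "solved" means here: the full game tree is small enough to enumerate, so the optimal strategy is known with certainty. The post's point is that no such exhaustive check exists for real-world reasoning.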

Jared Peterson

Science, Strategy, and Training

I wonder if it's better to think of Kahneman and Tversky's (K&T) work as a reaction to behaviorism than as a reaction to economics. Yes, the Cognitive Revolution happened before they did their most famous work, but psychology was still recovering from Behaviorism's hold. Behaviorism said that minds were black boxes and that we shouldn't even try to understand what was going on inside them, because it was impossible. K&T blew that belief out of the water.

When the two compared human decision-making to models of "rational" decision-making, they found humans systematically came to different answers. And it dawned on them: if they could find a rule that led to those deviations, then that rule was likely the same one humans were using to reason. It was an impressive insight. They had found a way to infer what was happening inside the black box that is the mind. Not out of an interest in proving that humans were stupid, but as a methodology for studying cognition. (The idea of studying deviations from models was developed by others first, but they were the first to apply it in this particular way.)

Thus the Heuristics and Biases paradigm was born. The approach was to identify systematic "biases" (deviations) from economic, statistical, and similar models, and then hypothesize "heuristics" (rules) that could explain why humans came to a different answer than those models.

As a methodology for uncovering principles of human cognition, I fully endorse the Heuristics and Biases paradigm. It's a great approach. I think the heuristics that have been identified are in large part correct descriptions of how humans actually reason. (Or at least of how they reason in lab settings with unfamiliar stimuli.) It's only once we start drawing conclusions about rationality, or over-generalizing the findings to non-lab settings, that I get uneasy.

Jared Peterson

Science, Strategy, and Training

Humans are not hopeless, helpless, or totally irrational. By understanding human potential, the wicked problems of human behavior can be transformed into actionable strategies.

🔗 This is the central claim on my newly launched website: www.behaviorchange.expert

My goal with this site is to clarify what I do, and to bridge connections not just with me, but also with two remarkable (though very different) organizations I’m associated with:

🌊 Nuance Behavior - An initiative I co-founded that's focused on digital behavior change. We tackle complex challenges by illuminating the underlying psychology and developing strategies and frameworks for changing behavior across the entire user journey.

🧊 ShadowBox LLC - Where we research and train decision-making as well as intuitive and perceptual expertise. At ShadowBox, we focus on high-stakes domains characterized by uncertainty, time pressure, and complexity, such as medicine, law enforcement, and child welfare.

These may seem very different, but I continually find there is much overlap. The principles of expert decision-making and motivation are complementary, and can help clarify the goals and strategies, and produce meaningful tactics, for any project that involves humans (i.e., all projects).

📢 If you are interested in research, strategy, or training, either within your company or in some target population, please schedule a call! We can help you uncover the underlying psychology, improve decision-making, and shift behavior in your project.

Jared Peterson

Science, Strategy, and Training

Every academic eventually reaches the Taco Bell stage of their career. Just as everything on the menu at Taco Bell consists of 10 ingredients mixed and matched in different ways, academics reach a point where they use the same 10 anecdotes and talking points for every presentation they give.

Why? Because after decades of studying these principles in depth, they know the points inside and out and can wield them to answer any question or challenge thrown at them. Every question is an order for which they have the ingredients. For example, I don’t care what question you ask Richard Thaler, I guarantee you he can answer it by talking about cashews, urinal flies, and organ donation.

Knowledge of that depth doesn’t come from reading every book, listening to every podcast, and remembering every bias. It comes from engaging with a few ideas deeply. Outside of an academic context, this sort of depth is hard to get. So my recommendation to those outside of academia is this: find a group of peers and debate topics in your field.

Since 2016 I have been on an online forum with friends where we talk and debate politics, philosophy, science, entertainment, family, and everything else under the sun. Having a group of intelligent friends with whom I can talk about these topics has been one of the most important intellectual investments I have made outside of college. They force me to think and write, build on my thoughts, introduce me to new ideas and thinkers, ask novel questions I never thought of, and encourage me to keep progressing. It is certainly more motivating than online school, and I would argue just about as useful.

If you ask me to write a paper about a topic, one of my first moves will likely be to go to the forum and search for the times we have talked about it. There my thoughts are written down in detail, with links to other threads where we went over similar topics.
Often there will even be counterarguments, and counter-counterarguments. It is as valuable as any note-taking software, but with the added benefit of people who think differently. And because of this, there are hundreds of topics for which I have my 10 ingredients - hundreds of topics for which I have reached the Taco Bell stage.

There are of course a thousand other things you should be doing to keep up with your field. But I would argue that finding a good group of peers belongs near the top of your list. It brings depth, breadth, and motivation. It can be as powerful as a graduate program in driving your learning, if for no other reason than that a good group of friends is incredibly motivating and keeps you on your toes. You will become a veritable Taco Bell of insights as you debate topics and introduce each other to new ideas. And if you keep it up long enough, maybe one day you’ll even be a Cheesecake Factory.

If you do decide to pursue this idea, let me know. I may be interested in joining your group. :)

Jared Peterson

Science, Strategy, and Training

Is 'Nudge' ethical? I have two answers to this question.

The first is that I don't think it is a meaningful question, because the word Nudge is not really all that meaningful. Any change to the context around a decision which might change behavior (e.g., clarifying options, forcing a choice, using bold font, adding reviews) could be considered a 'Nudge.' So to declare all Nudges unethical would be tantamount to saying that all presented choices are unethical.

My second response is this: The US Supreme Court has determined that it is UNETHICAL for an organization to use freedom of choice as an excuse not to consider the well-being of its employees, and so nudges must be ethical at least some of the time. In Hughes v. Northwestern University, Northwestern had offered 400+ retirement and savings plans, many of which were terrible and had unreasonably high fees. Northwestern argued that employees are free to choose whatever they want, and that it had no responsibility to help or influence the decisions of its employees. In a unanimous decision, the Supreme Court disagreed, holding that Northwestern had fiduciary obligations to its employees and could not leave the entire burden of choice on them.

If someone has an ethical criticism of a SPECIFIC nudge, I am all ears. I think Behavioral Scientists should always be questioning what is going on in the field. But when people criticize Nudge as a concept, I find it hard to take the criticism seriously.

Jared Peterson

Science, Strategy, and Training

Last week, I defended the ethics of Nudge. Today, I feel it's my duty to highlight some things I consider unethical.

Offender number 1: Microsoft
Microsoft's persistent and overbearing push to get users onto OneDrive is over the top. Personally, I've only used OneDrive by accident. Occasionally, I lose files for weeks because Microsoft automatically saves them to OneDrive without me realizing it.

Offender number 2: Duolingo
Duolingo showcases the difference between "Little e" and "Big E" behavior change ('e/E' stands for engagement). While Duolingo is effective at keeping users engaged for hours, it falls short in helping users achieve fluency in a language. A particularly emblematic feature is the ability to buy back a lost streak. Duolingo is often better at getting users to care about streaks than about the language they are learning, and it makes sure to capitalize on that.

Offender number 3: Default organ donation
Even Thaler and Sunstein, who popularized this famous case study, have acknowledged that defaults in organ donation might be unethical. Follow-up studies show that defaults don't work well in this context anyway, due to social and policy factors at hospitals. If it's unethical and ineffective, why does it remain a prominent example in the field? (Or perhaps it’s ethical because it doesn’t influence people anyway? 🤔)

Takeaway: Nudge is a meaningless term because you can't not nudge, and so the ethics of behavior change should be a constant consideration in product and marketing design, rather than something to consider only on special occasions. Organizations must choose defaults, advertise, make money, and present choices - all of which involve ethical considerations, regardless of whether these choices are made with the concept of "Nudge" in mind. We can't excuse our design choices through "revealed preference" or by claiming we're just designing for aesthetics (as if aesthetics doesn't 'nudge').
Every design choice has ethical implications, and so we must address those implications as a regular part of the workflow, rather than treating the ethics of behavior change as a special case for "Nudges." Another consideration is that we often sell ourselves on the value of the thing we are trying to accomplish, which can make us blind to the ethics of what we are doing. This is where I think Behavioral Scientists are most likely to get into hot water. It is always important to take into account how our ethics and values differ from others'. You will never please everyone, but if you find yourself recommending design choices that would draw scorn from someone, it is worth thinking through whether that scorn might have merit.

#Ethics #BehaviorChange #Nudge #ProductDesign #Marketing #Microsoft #Duolingo #OrganDonation #DesignChoices

Jared Peterson

Science, Strategy, and Training

Nudges have been dismissed as unethical and ineffective. So here are some suggestions on how to save us from Nudge Theory.

Put the carrots in a safety deposit box
In one study, cafeteria-goers were nudged to put carrots on their plates, which they did. But they did not eat them. What a waste! The solution: let’s put the carrots in a safety deposit box that can only be opened with a security code provided by the cafeteria staff. Workers are only permitted to provide the code if someone explicitly asks for carrots. After all, if we are not 100% sure that someone is going to eat the carrots on their plate, then we should probably prevent them from putting the carrots on their plate in the first place. Otherwise that’s just food waste!

Use the tried and true tactic of writing notices in small-font legalese
Governments are good at one thing, and one thing only - writing documents that no one understands. So why are we Behavioral Scientists trying to prevent them from doing what they do best?

Avoid changing too many things
When running a “Nudge” experiment, you can only change one small thing, or else it will be unclear what caused the change. As a result, behavioral research often tests one teeny tiny intervention, like changing the subject line of an email. These small changes, unsurprisingly, have small effects. As a practitioner I usually don’t limit myself to making one small change, but perhaps the critics have a point. If one small change in a controlled experiment has only a tiny impact, then logically lots of small changes can't possibly have a larger impact. That’s just common sense.

If notices don’t work, let’s use fines and jail
‘Nuff said.

Provide no defaults
We might harm people if they sign up for the wrong number of times a week to go to the gym.
So let’s give them the freedom to choose every possible option. I never understood combinatorics very well, but I’m sure gym-goers will appreciate choosing among every possible combination of the 365 days of the year.

Make voting more difficult
In one study, people who were “nudged” into choosing a specific plant took worse care of the plant, and it died sooner. Well, shouldn’t we take our democracy at least as seriously as a plant? I propose a plan to make voting more difficult. That way the only people who vote are the people who really, truly care - the fanatics.

---

This post is partly inspired by responses to a recent WSJ article. The article itself makes a good point that every Behavioral Scientist appreciates - single nudges don’t create long-term behavior change. But some have reacted as if the article meant nudges were useless. Nudge isn’t some special type of manipulation; it is just strategy and design informed by psychology (which is why the term is misleading, in my opinion).

Jared Peterson

Science, Strategy, and Training

I have a somewhat controversial take: Behavioral Economics has inadvertently contributed more to reviving and legitimizing the concept of Homo Economicus than any field outside of traditional economics.

If it hadn't been for the emergence of Behavioral Economics, psychologists might have largely ignored economists' 'as-if' models of human behavior. Yet, ironically, within Behavioral Economics the prevailing model of human behavior could be called "Homo Economicus Plus." The existence of biases legitimizes the model by allowing researchers to tack biases onto the end of economic models as if they were error terms. This in turn transforms these hypothetical models into seemingly descriptive ones, now ostensibly validated by psychological research.

Behavioral Economics seems to me a significant obstacle preventing economists from taking cognitive science more seriously. Biases are like epicycles, enabling economists to cling to as-if models of human thinking, but now under a veneer of psychological legitimacy.

I don't mean to say Behavioral Economics doesn't have good psychology. Rather, humans are far more complex and intriguing than merely "Homo Economicus plus biases." We develop intuitive theories, weave narratives, and make judgments rooted in our perceptions of right and wrong. A robust approach to Behavioral Economics would integrate these rich dimensions of human experience. Yet these aspects are often absent from discussions of Behavioral Economics, in favor of biases. We can't just add error terms to the end of rational models and call it good psychology.

If, like me, you are interested in what Behavioral Economics could look like if it moved beyond biases, I recommend two papers by Samuel Johnson.
Toward a Cognitive Science of Markets: Economic Agents as Sense-Makers
Conviction Narrative Theory: A Theory of Choice Under Radical Uncertainty

These papers represent, for me, what the future of Behavioral Economics could be if it took cognitive science more seriously.
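As a concrete instance of the "error terms bolted onto a rational model" pattern, here is a minimal sketch of the value function from Tversky and Kahneman's cumulative prospect theory, using their published 1992 median parameter estimates (curvature alpha ≈ 0.88, loss aversion lambda ≈ 2.25). It illustrates, not endorses, the pattern: the skeleton is still outcome-maximization, with psychology entering only as tuning parameters.

```python
# Prospect-theory value function: diminishing sensitivity (alpha) and
# a loss-aversion multiplier (lam) attached to raw monetary outcomes.
# Parameters are the median estimates from Tversky & Kahneman (1992).
def value(x, alpha=0.88, lam=2.25):
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# A gain and an equal-sized loss are no longer symmetric:
print(value(100))                 # ~57.5
print(value(-100))                # ~-129.5
print(value(100) + value(-100))   # negative: losses loom larger than gains
```

Note how little of the "economic agent" changes: remove the two parameters (alpha = 1, lam = 1) and you are back to plain expected value - which is roughly what "Homo Economicus Plus" means in the post.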

Jared Peterson

Science, Strategy, and Training

Walter Mischel’s (of Marshmallow Experiment fame) controversial theory of personality (CAPS) has intrigued me for years. For Habit Weekly's new monthly journal club over on the slack I decided to dig in. Consider two claims: Jared is Agreeable Arizona is sunny Both of these are averages, traits, or tendencies. But they are not models. And by nature of not being models cannot forecast variability. i.e., You cannot forecast rain with a description as reductionist as “Arizona is sunny.” Yet, we can agree on some situations (context) where there will be variability. Arizona will not be sunny when a storm is passing through. Jared will not be agreeable if you disagree with him (just kidding). There are predictable deviations from the tendency. Or another way of saying it is that there is predictable variability between situations, and predictable consistency within situations. This is exactly what Mishel’s worked showed. For example, a child that is aggressive when warned might be below average on aggression when approached by a peer, and these situation specific behaviors are consistent over time. This context sensitivity IS personality according to Mischel, and trait-based approaches are fundamentally unable to capture it. Kahneman has called this a “scandal” because it showed the insufficiency of traits which must treat this predictable deviation as an error. But such variability is not an error, its a predictable part of personality. (I'm sure Kahneman saw a comparison to his own work) Rather than identifying all possible contexts that effect personality (impossible), Mischel’s Cognitive Affective System Theory of Personality (CAPS) instead focuses on how a person interprets (or construes) a situation. He and his co-author, Shoda, argue this "construal" transforms unfamiliar situations into familiar cues which then activate habits, and habits are then what really drive personality - both its average and its variation. It is a concise and plausible theory. I'm a fan. 
But there is an old saying, “all models are wrong, but some are useful,” and CAPS has a tragic flaw that is hard to get around: It is more right, but less useful. Modeling the process of how someone interprets and construes a situation is far more difficult than understanding average behavior with a survey, and so CAPS doesn't make behavior easier to predict. The comparison to Behavioral Science is hard to ignore. Humans tend to be loss averse, tend to be influenced by social norms, and tend to be impacted by choice architecture. But traits are fundamentally insufficient for predicting generalizability. Without a theory of how people interpret situations, our ability to predict generalizability is permanently shot. It is like trying to forecast when all you know is "Arizona is sunny." I plan on writing more about this in the future, but in the meantime check out references in the comments. #Personality #BehavioralScience #ChoiceArchitecture #Context #Psychology

Jared Peterson

Science, Strategy, and Training

1. What has helped you to be a better behavioral scientist 2. Isn't on the radar of most behavioral scientists My pick is "Games: Agency as Art" by Thi Nguyen At first glance, a book on philosophy of games doesn't seem related to behavioral science, but game designers are the pinnacle of their craft in designing for behavior. There is more to games than points and badges. Nguyen's core argument is that just as [paintings are the art of seeing] and [songs are the art of hearing] that [games are the art of agency]. That's vague, so let me make it concrete. Country songs crystallize the hominess of a small town, but Animal Crossing crystalizes the act of beautifying that town. Romance books crystallize the experience of love, but playing 'house' crystalizes the experience of adulthood for children all over the globe. Art crystallizes the experience of beauty, but chess crystalizes a mode of being: careful deliberation of moves and counter moves in the hopes of having a brilliant insight that brings everything to conclusion. In games we take a step outside of our normal way of thinking and acting, and out of our normal personality in order to experience a different way of being (or agency). And also, perhaps, to learn the *joy* and *beauty* of those alternative ways of being. Nguyen says when he started philosophy, he wasn't great at it because he didn't have the patience. It seemed tedious to think through all the nuances. We are all somewhere on the ADHD spectrum, and it perhaps seemed to him that he was too high on that spectrum to do the deep sort of engagement that philosophy required. But then he discovered Chess. Chess taught him how to enjoy a new mode of thought he had previously found tedious - the careful deliberation of moves and counter moves. Chess unlocked a new 'agency" for him that had previously been out of his grasp. A new facet of his personality he didn't know he was capable of. 
He learned to enjoy what he had previously found tedious I read this book while working on an e-commerce project where we had identified Choice Overload as a problem. But as I read Nguyen I started to ask myself how video game designers get away with the exhausting search for collectibles, or the tediousness of grinding for points. And it occurred to me that the problem wasn't the number of options on the site, but the mode of being (agency) we were forcing users into. If we ask users to engage in a methodical comparison of options, of course users will experience Choice Overload. But if we could find a way to crystallize the experience of exploration, then what had previously been tedious could instead be exciting. This insight eventually led to the article I wrote with Roos van Duijnhoven, MSc on Choice Overload. Nguyen's book is a treasure trove of insights for behavioral design, personality change, gamification, and the dangers of point systems. It's a book I wish would become part of the canon for behavioral scientists everywhere.

Jared Peterson

Science, Strategy, and Training

Tic Tac Toe is a solved game. That means that, if you know the right techniques, it is mathematically impossible to lose. This is opposed to Chess where we are unsure if there is a single optimal strategy. (edit: or as opposed to Poker where there cannot be a single optimal strategy because what is rational depends on a changing situation) The question I wish to ask is this; Is life a solved game? Is there a single reasoning strategy that, if you can apply it correctly, you will always come to the optimal and correct conclusion such that it is impossible to lose? I ask because I sometimes fear we Behavioral Scientists act like life is solved. In order to conclude that someone is exhibiting a bias, and therefore irrational, two things must be true. 1. The statistical or economic model which you are comparing the person to must actually give the correct answer if applied correctly (ie rationality is solved) 2. You as the researcher must know how to apply this strategy correctly The conclusion of these two premises is that you yourself are perfectly rational. You have solved life. Congratulations! Though you must forgive me for doubting. Yes, in some lab settings (which are often tic tac toe like in their simplicity) we can know the correct answer and so know someone is being irrational. But in the real world life is really messy, and it is unclear that economic and statistical reasoning is actually optimal. In fact, it's not hard to find settings where these models will lead you astray, such as in non-ergodic settings, or in domains with unknown unknowns. This is why I'm hesitant to talk about biases outside of very controlled lab settings. I am not confident in the models, nor in our ability to apply them correctly and so am constantly unsure whether the bias is actually there, and even if it is, whether that makes it irrational. We can show that a heuristic is present, but need to be more cautious about declaring that heuristic to be a bias. 
Sometimes heuristics give the same answer as a rational model, and sometimes heuristics are more optimal than a rational model. Conclusion: We have not solved life, and so should be more humble in our declarations of irrationality. Not because we can never be more rational, but because there should always be uncertainty that we understand enough to declare ourselves more rational. Especially when the subjects of our study are in the context and we merely observe it.

Jared Peterson

Science, Strategy, and Training

I wonder if it's better to think of Kahneman and Tversky's (K&T) work as a reaction to behaviorism than as a reaction to economics. Yes, the Cognitive Revolution happened before they did their most famous work, but psychology was still recovering from Behaviorism's hold. Behaviorism said that minds were black boxes and we shouldn't even try to understand what was going on inside them because it was impossible. K&T blew that belief out of the water. When these two compared human decision making to models of "rational" decision-making, they found humans systematically came to different answers. And it dawned on them - if they could find a rule that led to those deviations, then that rule was likely the same one that humans were using to reason. It was an impressive insight. They had found a way to infer what was happening inside the black box that is the mind. Not out of an interest in proving that humans were stupid, but as a methodology for studying cognition. (The idea of studying deviations from models was developed by others first, but they were the first to apply it in this particular way.) Thus the Heuristics and Biases paradigm was born. The approach was to identify systematic "biases" (deviations) from economic, statistical and similar models, and then hypothesize "heuristics" (rules) which could explain why humans came to a different answer than those models. As a methodology for uncovering principles of human cognition, I fully endorse the Heuristics and Biases paradigm. It's a great approach. I think the heuristics that have been identified are in large part correct descriptions of how humans actually reason. (Or at least how they reason in lab settings with stimuli they are unfamiliar with) (It's only once we start concluding things about rationality, or start overly generalizing the findings to non-lab settings, that I get uneasy)

Jared Peterson

Science, Strategy, and Training

Humans are not hopeless, helpless, or totally irrational. By understanding human potential, the wicked problems of human behavior can be transformed into actionable strategies.

🔗 This is the central claim of my newly launched website: www.behaviorchange.expert

My goal with this site is to clarify what I do, and to build connections not just with me, but also with two remarkable (though very different) organizations I'm associated with:

🌊 Nuance Behavior - An initiative I co-founded that's focused on digital behavior change. We tackle complex challenges by illuminating the underlying psychology and developing strategies and frameworks for changing behavior across the entire user journey.

🧊 ShadowBox LLC - Where we research and train decision-making as well as intuitive and perceptual expertise. At ShadowBox, we focus on high-stakes domains characterized by uncertainty, time pressure, and complexity, such as medicine, law enforcement, and child welfare, among many others.

These may seem very different, but I continually find there is much overlap. The principles of expert decision-making and motivation are complementary; they can help clarify the goals and strategies, and produce meaningful tactics, for any project that involves humans (i.e., all projects).

📢 If you are interested in research, strategy, or training, whether within your company or in some target population, please schedule a call! We can help you uncover the underlying psychology, improve decision-making, and shift behavior in your project.

Jared Peterson

Science, Strategy, and Training

Every academic eventually reaches the Taco Bell stage of their career. Just as everything on the menu at Taco Bell consists of 10 ingredients mixed and matched in different ways, academics reach a point where they use the same 10 anecdotes and talking points for every presentation they give. Why? Because after decades of studying these principles in depth, they know the points inside and out and can wield them to answer any question or challenge thrown at them. Every question is an order for which they have the ingredients. For example, I don't care what question you ask Richard Thaler, I guarantee he can answer it by talking about cashews, urinal flies, and organ donation.

Knowledge of that depth doesn't come from reading every book, listening to every podcast, and remembering every bias. It comes from engaging with a few ideas deeply. Outside of an academic context, this sort of depth is hard to get. So my recommendation to those outside academia is this: find a group of peers and debate topics in your field.

Since 2016 I have been on an online forum with friends where we talk and debate politics, philosophy, science, entertainment, family, and everything else under the sun. Having a group of intelligent friends with whom I can talk about these topics has been one of the most important intellectual investments I have made outside of college. They force me to think and write, build on my thoughts, introduce me to new ideas and thinkers, ask novel questions I never thought of, and encourage me to keep progressing. It is certainly more motivating than online school, and I would argue just about as useful.

If you ask me to write a paper about a topic, one of my first moves will likely be to go to the forum and search for the times we have talked about it. There my thoughts are written down in detail, with links to other threads where we went over similar topics. Often there will even be counterarguments, and counter-counterarguments. It is as valuable as any note-taking software, but with the added benefit of people who think differently. And because of this, there are hundreds of topics for which I have my 10 ingredients, hundreds of topics for which I have reached the Taco Bell stage.

There are of course a thousand other things you should be doing to keep up with your field. But I would argue that finding a good group of peers belongs near the top of your list. It brings depth, breadth, and motivation. It can be as powerful as a graduate program in driving your learning, if for no other reason than that a good group of friends is incredibly motivating and keeps you on your toes. You will become a veritable Taco Bell of insights as you debate topics and introduce each other to new ideas. And if you keep it up long enough, maybe one day you'll even be a Cheesecake Factory.

If you do decide to pursue this idea, let me know. I may be interested in joining your group. :)

Jared Peterson

Science, Strategy, and Training

Is 'Nudge' ethical? I have two answers to this question.

The first is that I don't think it is a meaningful question, because the word "Nudge" is not really all that meaningful. Any change to the context around a decision that might change behavior (e.g., clarifying options, forcing a choice, using bold font, adding reviews) could be considered a 'Nudge.' So to declare all Nudges unethical would be tantamount to saying that all presented choices are unethical.

My second response is this: the US Supreme Court has determined that it is UNETHICAL for an organization to use freedom of choice as an excuse not to consider the well-being of its employees, and so nudges must be ethical at least some of the time. In Hughes v. Northwestern University, Northwestern had offered 400+ retirement and savings plans, many of which were terrible and had unreasonably high fees. Northwestern argued that employees were free to choose whatever they wanted, and that it had no responsibility to help or influence their decisions. In a unanimous decision, the Supreme Court disagreed, holding that Northwestern had fiduciary obligations to its employees and could not leave the entire burden of choice on them.

If someone has an ethical criticism of a SPECIFIC nudge, I am all ears. I think behavioral scientists should always be questioning what is going on in the field. But when people criticize Nudge as a concept, I find it hard to take the criticism seriously.

Jared Peterson

Science, Strategy, and Training

Last week, I defended the ethics of Nudge. Today, I feel it's my duty to highlight some things I consider unethical.

Offender number 1: Microsoft. Microsoft's persistent and overbearing push to get users onto OneDrive is over the top. Personally, I've only used OneDrive by accident. Occasionally, I lose files for weeks because Microsoft automatically saves them to OneDrive without my realizing it.

Offender number 2: Duolingo. Duolingo showcases the difference between "little e" and "big E" behavior change ('e/E' stands for engagement). While Duolingo is effective at keeping users engaged for hours, it falls short in helping users achieve fluency in a language. A particularly emblematic feature is the ability to buy back a lost streak. Duolingo is often better at getting users to care about streaks than about the language they are learning, and it makes sure to capitalize on that.

Offender number 3: Default organ donation. Even Thaler and Sunstein, who popularized this famous case study, acknowledged that defaults in organ donation might be unethical. Follow-up studies show that defaults don't work well in this context anyway, due to social and policy factors at hospitals. If it's unethical and ineffective, why does it remain a prominent example in the field? (Or perhaps it's ethical because it doesn't influence people anyway? 🤔)

Takeaway: Nudge is a meaningless term because you can't not nudge, and so the ethics of behavior change should be a constant consideration in product and marketing design rather than something to consider only on special occasions. Organizations must choose defaults, advertise, make money, and present choices, all of which involve ethical considerations, regardless of whether these choices are made with the concept of "Nudge" in mind. We can't excuse our design choices through "revealed preference" or by claiming we're just designing for aesthetics (as if aesthetics doesn't 'nudge'). Every design choice has ethical implications, and so we must address these implications as a regular part of the workflow rather than treating the ethics of behavior change as a special case for "Nudges."

Another consideration is that we often sell ourselves on the value of the thing we are trying to accomplish, which can blind us to the ethics of what we are doing. This is where I think behavioral scientists are most likely to get into hot water. It is always important to take into account how our ethics and values differ from others'. You will never please everyone, but if you find yourself recommending design choices that would draw scorn from someone, it is worth thinking through whether that scorn might have merit.

#Ethics #BehaviorChange #Nudge #ProductDesign #Marketing #Microsoft #Duolingo #OrganDonation #DesignChoices

Say hello

Interested in changing behavior or improving decision-making? Let's talk.

Say Hello

Copyright © 2024 Jared Peterson. All rights reserved.