Thursday, March 2, 2023

Are Matching Estimators and the Conditional Independence Assumption Inconsistent with Rational Decision Making?

 Scott Cunningham brings up some interesting points about matching and utility maximization in this substack post: https://causalinf.substack.com/p/why-do-economists-so-dislike-conditional 

"Because most of the time, when you are fully committed to the notion that people are rational, or at least intentionally pursuing goals and living in the reality of scarcity itself, you actually think they are paying attention to those potential outcomes. Why? Because those potential outcomes represent the gains from the choice you’re making....if you think people make choices because they hope the choice will improve their life, then you believe their choices are directly dependent on Y0 and Y1. This is called “selection on treatment gains”, and it’s a tragic problem that if true almost certainly means covariate adjustment won’t work....Put differently, conditional independence essentially says that for a group of people with the same covariate values, their decision making had become erratic and random. In other words, the covariates contained the rationality and you had found the covariates that sucked that rationality out of their minds."

This makes me want to ask - is there a way I can specify utility functions or think about utility maximization that is consistent with the CIA in a matching scenario? This gets me into very dangerous territory because my background is applied economics, not theory. I think most of the time when matching is being used in observational settings, people aren't thinking about utility functions and consumer preferences and how they relate to potential outcomes. Especially non-economists. 

Thinking About Random Utility Models

The discussion above for some reason motivated me to think about random utility models (RUMs). Not being a theory person and having hardly worked with RUMs at all, I'm being even more dangerous, but hear me out - this is just a thought experiment.

I first heard of RUMs years ago when working in market research and building models focused on student enrollment decisions. From what I understand they are an important workhorse in discrete choice modeling applications. Food economist Jayson Lusk has even looked at RUMs and their predictive validity via functional magnetic resonance imaging (see Neural Antecedents of a Random Utility Model).

The equation below represents the basic components of a random utility model:

U = V + e

where 'V' represents systematic utility and 'e' represents random utility.

Consumers choose the option that provides the greatest utility. The systematic component 'V' captures attributes describing the alternative choices, perceptions about the choices, and characteristics of the decision maker. In the cases where matching methods are used in observational settings, the relevant choice is often whether or not to participate in a program or take treatment.
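To make the pieces concrete, here is a minimal sketch (my own illustration in Python; nothing like this appears in Scott's post) of a RUM for a binary participation decision, where each person compares total utility under the two alternatives. The functional form and coefficients for V are illustrative assumptions.

```python
# A minimal RUM sketch: U_d = V_d + e_d, choose the alternative with higher total utility.
import numpy as np

rng = np.random.default_rng(42)
n = 5
x = rng.normal(size=n)            # observed characteristics of each decision maker

V0 = np.zeros(n)                  # systematic utility of not participating (D = 0)
V1 = 0.5 + 1.0 * x                # systematic utility of participating (D = 1)
e0 = rng.gumbel(size=n)           # random (unobserved) components of utility
e1 = rng.gumbel(size=n)

U0, U1 = V0 + e0, V1 + e1
D = (U1 > U0).astype(int)         # choose the option with the greater total utility
print(D)
```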

This seems to speak to one of the challenges raised in Scott's post (keep in mind Scott never mentions RUMs; all of this about RUMs is my own meandering, so if the discussion of RUMs is nonsensical it's on me, not him):

"The known part requires a model, be it formal or informal in nature, and the quantified means it’s measured and in your dataset. So if you have the known and quantified confounder, then a whole host of solutions avail themselves to you like regression, matching, propensity scores, etc....There’s a group of economists who object to this statement, and usually it’s that “known” part."

What seems appealing to me is that RUMs appear to allow us to make use of what we think we can know about utility via 'V' while still admitting that there is a lot we don't know, captured by 'e'. In this formulation 'e' still represents rationality - it's just heterogeneity in rational preferences that we can't observe, and it is assumed to be random. Many economists working in discrete choice modeling contexts are apparently comfortable with the 'known' part of a RUM, at least as I understand it.

A Thought Experiment: A Random Utility Model for Treatment Participation

Again - proceeding cautiously here, suppose that in an observational setting the decision to engage in a program or treatment designed to improve outcome Y is driven by systematic and random components in a RUM:

U = V(x) + e

and the decision to participate is based on, as Scott describes, the potential outcomes Y1 and Y0, which represent the gains from choosing.

delta = (Y1 - Y0) where you get Y1 for choosing D=1 and Y0 for D=0

In the RUM you choose D = 1 if U(D = 1) > U(D = 0) 

D = f(delta) = f(Y1,Y0)= f(x)

and we specify the RUM as U(D) = V(x) + e

where x represents all the observable things that might contribute to an individual's utility (perceptions about the choices, and characteristics of the decision maker) in relation to making this decision. 

So the way I wanted to think about this is that when we are matching, the factors we match/control for would be the observable variables 'x' that contribute to systematic utility V(x), while the unobservable aspects reflect heterogeneous preferences across individuals. Those would contribute to the random component of the RUM.

So in essence YES, if we think about this in the context of a RUM, the covariates contain all of the rationality (at least the observable parts) and what is unobserved can be modeled as random. We've harmonized utility maximization, matching and the CIA! 
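Here is a rough simulation sketch of that 'harmonized' case (my own toy construction, with all functional forms and parameters assumed for illustration): treatment choice follows U = V(x) + e with e independent of the potential outcomes, and stratifying on x recovers the treatment effect even though the naive comparison is badly biased.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(size=n)

Y0 = 1.0 + 2.0 * x + rng.normal(size=n)      # untreated potential outcome
tau = 1.0                                     # constant treatment gain
Y1 = Y0 + tau

V = 0.8 * x                                   # systematic utility of choosing D = 1
e = rng.logistic(size=n)                      # random utility, independent of (Y0, Y1)
D = (V + e > 0).astype(int)
Y = np.where(D == 1, Y1, Y0)                  # observed outcome

naive = Y[D == 1].mean() - Y[D == 0].mean()

# crude "matching": stratify on x and average within-stratum treated-control differences
bins = np.quantile(x, np.linspace(0, 1, 51))
strata = np.digitize(x, bins[1:-1])
diffs, weights = [], []
for s in np.unique(strata):
    m = strata == s
    if D[m].min() == 0 and D[m].max() == 1:   # need both treated and controls in the stratum
        diffs.append(Y[m][D[m] == 1].mean() - Y[m][D[m] == 0].mean())
        weights.append(m.sum())
adjusted = np.average(diffs, weights=weights)

print(f"true effect {tau:.2f} | naive {naive:.2f} | stratified on x {adjusted:.2f}")
# the naive difference is biased upward by selection on x; the stratified
# estimate is close to the true effect of 1.0 because e is independent of (Y0, Y1)
```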

Meeting the Assumptions of Random Utility and the CIA

But wait...not so fast. In the observational studies where matching is deployed, I am not sure we can assume the unobserved heterogeneous preferences represented by 'e' will be random across the groups we are comparing. Those who choose D = 1 will have obvious differences in preferences from those who choose D = 0. There will be important differences between treatment and control groups' preferences not accounted for by covariates in the systematic component V(x), and those unobserved preferences in 'e' will be dependent on potential outcomes Y0 and Y1, just like Scott was saying. I don't think we can assume in an observational setting with treatment selection that the random component of the RUM is really random with regard to the choice of taking treatment if the choice is driven by expected potential outcomes.

Some Final Questions

If 'x' captures everything relevant to an individual's assessment of their potential outcomes Y1 and Y0 (and we have all the data for 'x', which itself is a questionable assumption), then could we claim that everything else captured by the term 'e' is due to random noise - maybe pattern noise or occasion noise?

In an observational setting where we are modeling treatment choice D, can we break 'e' down further into components like below?

e = e1 + e2

where e1 is unobservable heterogeneity in rational preferences driven by potential outcomes Y1 & Y0, making it non-random, and e2 represents noise that is more random, like pattern or occasion noise, and likely to be independent of Y1 & Y0.

IF the answer to the questions above is YES, and we can decompose the random component of RUMs this way, and e2 makes up the largest component of e (i.e., e1 is small, non-existent, or insignificant), then maybe a RUM is a valid way to think about modeling the decision to choose treatment D, and we can match on the attributes of systematic utility 'x' and appeal to the CIA (if my understanding is correct).

But the less we actually know about x and what is driving the decision as it relates to potential outcomes Y0 and Y1, the larger e1 becomes, and the random component of a RUM may no longer be random.
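A toy sketch of that decomposition (again my own construction under simple linear assumptions): e1 loads on the individual gains, which here share an unobserved driver with Y0, e2 is noise, and increasing the weight w on e1 pulls a covariate-adjusted estimate away from the true ATT.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
x = rng.normal(size=n)
u = rng.normal(size=n)                       # unobserved factor

Y0 = 2.0 * x + u                             # baseline outcome depends on x and u
gain = 1.0 + 0.5 * u                         # treatment gains vary with the same u
Y1 = Y0 + gain

for w in [0.0, 1.0, 2.0, 4.0]:
    e1 = w * gain                            # 'rational' part of e: selection on anticipated gains
    e2 = rng.logistic(size=n)                # noise-like part of e (pattern/occasion noise)
    D = (0.8 * x + e1 + e2 > 1.5).astype(int)
    Y = np.where(D == 1, Y1, Y0)

    # regression adjustment on x as a stand-in for matching on x
    X = np.column_stack([np.ones(n), D, x])
    est = np.linalg.lstsq(X, Y, rcond=None)[0][1]
    att = gain[D == 1].mean()                # true average gain among the treated
    print(f"w={w:3.1f}  true ATT={att:.2f}  covariate-adjusted estimate={est:.2f}")
# when w = 0 the estimate lines up with the truth; as w grows, selection on gains
# (working through u, which also shifts Y0) pulls the estimate away from the ATT
```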

If my understanding above is correct, then the things we likely would have to assume for a RUM to be valid turn out to be similar to, if not exactly, the things we need for the CIA to hold.

The possibility of meeting the assumptions of a RUM or the CIA would seem unlikely in observational settings if (1) we don't know a lot about systematic utility and 'x' and (2) the random component 'e' turns out not to be random.

Conclusion

So much for an applied guy trying to do theory to support the possibility of the CIA holding in a matched analysis. I should say I am not an evangelist for matching; I'm trying to be more of a realist about its uses and validity. Scott's post introduces a very interesting way to think about matching and the CIA and the challenges we might have meeting the conditions for it.


Thursday, January 26, 2023

What is new and different about difference-in-differences?

Back in 2012 I wrote about the basic 2 x 2 difference-in-differences analysis (two groups, two time periods). Columbia public health probably has a better introduction.

The most famous example of an analysis that motivates a 2 x 2 DID analysis is John Snow's 1855 analysis of the cholera epidemic in London.





 I have since written about some of the challenges of estimating DID with glm models (see here, here, and here.), as well as combining DID with matching, and problems to watch out for when combining methods. But a lot of what we know about difference in differences has changed in the last decade. I'll try to give a brief summary based on my understanding and point towards some references that do a better job presenting the current state.

The Two-Way Fixed Effects model (TWFE)

The first thing I should discuss is extending the 2x2 model to include multiple treated groups and/or multiple time periods. The generalized model for DID, also referred to as the two-way fixed effects (TWFE) model, is the best way to represent those kinds of scenarios:

Ygt = ag + bt + δDgt + εgt

ag = group fixed effects

bt = time fixed effects

Dgt = treatment group * post period indicator (interaction term)

δ = ATT or DID estimate
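To make the notation concrete, here is a minimal sketch (in Python, with made-up data; the post itself contains no code) of estimating the TWFE model above with group and time dummies:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
groups, periods = 6, 8
df = pd.DataFrame(
    [(g, t) for g in range(groups) for t in range(periods)],
    columns=["group", "time"],
)
treated_groups = {0, 1, 2}                    # groups that ever get treated
post = df["time"] >= 4                        # common treatment start (period 4)
df["d"] = (df["group"].isin(treated_groups) & post).astype(int)

group_fe = rng.normal(size=groups)            # a_g
time_fe = np.linspace(0, 1, periods)          # b_t
true_att = 2.0                                # delta
df["y"] = (group_fe[df["group"]] + time_fe[df["time"]]
           + true_att * df["d"] + rng.normal(scale=0.5, size=len(df)))

twfe = smf.ols("y ~ C(group) + C(time) + d", data=df).fit()
print(twfe.params["d"])                       # should be close to 2.0
```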

Getting the correct standard errors for DID models that involve many repeated measures over time and/or where treatment and control groups are defined by multiple geographies presents two challenges compared to the basic 2x2 model: serial correlation and correlation within groups. There are several approaches that can be considered depending on your situation.

1 - Block bootstrapping

2 - Aggregating data into single pre and post periods

3 - Clustering standard errors at the group level

Clustering at the group level should provide the appropriate standard errors in these situations when the number of clusters is large.
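Continuing the toy panel from the sketch above, a cluster-robust fit at the group level might look like this (statsmodels supports this via cov_type="cluster"; whether it is adequate depends on having a reasonably large number of clusters):

```python
# Reuses df and smf from the TWFE sketch above.
clustered = smf.ols("y ~ C(group) + C(time) + d", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["group"]}
)
print(clustered.bse["d"])   # cluster-robust standard error for the DID estimate
```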

For more details on TWFE models, both Scott Cunningham and Nick Huntington-Klein have great econometrics textbooks with chapters devoted to these topics. See the references below for more info.

Differential Timing and Staggered Rollouts

But things can get even more complicated with DID designs. Think about situations where different groups get treated at different times over a number of time periods. This is not just a thought experiment trying to imagine the most difficult study design and pondering for the sake of pondering – these kinds of staggered rollouts are very common in business and policy settings. Imagine policy rules adopted by different states over time (like changes in minimum wages), or imagine testing a new product or service by rolling it out to different markets over time. Understanding how to evaluate the impact of such rollouts is important. For a while it seemed economists may have been a little guilty of handwaving with the TWFE model, assuming the estimated treatment coefficient was giving them the effect they wanted.

But Andrew Goodman-Bacon refused to take this interpretation at face value and broke it down for us, showing that the TWFE estimator gives us a weighted average of all the potential 2x2 DID estimates you could make with the data. That actually sounds intuitive and helpful. But what he discovered that is not so intuitive is that some of those 2x2 comparisons use previously treated groups as controls for newly treated groups. That's not a comparison we are generally interested in making, but it gets averaged in with the others and can drastically bias the results, particularly when there is treatment effect heterogeneity (the treatment effect is different across groups and trending over time).
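Here is a rough simulation sketch (my own construction in the spirit of the Goodman-Bacon critique, not code from his paper) of a staggered rollout with treatment effects that grow over time; the single static TWFE coefficient need not line up with the average effect on the treated:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
periods = 10
adoption = {0: 2, 1: 5, 2: 8, 3: None}            # group -> first treated period (None = never)
rows = []
for g, start in adoption.items():
    for t in range(periods):
        treated = start is not None and t >= start
        effect = (t - start + 1) if treated else 0.0   # dynamic effect: grows each period after adoption
        y = g + 0.2 * t + effect + rng.normal(scale=0.1)
        rows.append((g, t, int(treated), y, effect))
df = pd.DataFrame(rows, columns=["group", "time", "d", "y", "effect"])

twfe = smf.ols("y ~ C(group) + C(time) + d", data=df).fit()
true_avg_effect = df.loc[df.d == 1, "effect"].mean()
print(f"TWFE estimate: {twfe.params['d']:.2f}  vs. average effect on treated: {true_avg_effect:.2f}")
# with effects trending up after adoption, already-treated groups act as 'controls'
# for later adopters, and the single TWFE coefficient can land well away from the
# average effect on the treated
```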

So how do you get a better DID estimate in this situation? I'll spare you the details (because I'm still wrestling with them), but the answer seems to be the estimation strategy developed by Callaway and Sant'Anna. The documentation for their R package walks through a lot of the details and challenges with TWFE models under differential timing.

Additionally, this video from Andrew Goodman-Bacon was really helpful for understanding the 'Bacon' decomposition of TWFE models and the problems above.


After watching Goodman-Bacon, I recommend this talk from Sant'Anna discussing their estimator. 

Below, Nick Huntington-Klein provides a great summary of the issues made apparent by the Bacon decomposition above and the Callaway and Sant'Anna method for staggered/rollout DID designs. He also gets into the Wooldridge Mundlak approach:

A Note About Event Studies

In a number of references I have tried to read to understand this issue, the term 'event study' is thrown around, and it seems like every time it is used it is used differently, but the author/speaker assumes we are all talking about the same thing. In this video Nick Huntington-Klein introduces event studies in a way that is the most clear and consistent. Watching this video might help.

References: 

Causal Inference: The Mixtape. Scott Cunningham. https://mixtape.scunning.com/ 

The Effect: Nick Huntington-Klein. https://theeffectbook.net/

Andrew Goodman-Bacon. Difference-in-differences with variation in treatment timing. Journal of Econometrics, Volume 225, Issue 2, 2021.

Brantly Callaway, Pedro H.C. Sant'Anna. Difference-in-Differences with multiple time periods. Journal of Econometrics, Volume 225, Issue 2, 2021.

Related Posts:

Modeling Claims Costs with Difference in Differences. https://econometricsense.blogspot.com/2019/01/modeling-claims-with-linear-vs-non.html 

Was It Meant to Be? OR Sometimes Playing Match Maker Can Be a Bad Idea: Matching with Difference-in-Differences. https://econometricsense.blogspot.com/2019/02/was-it-meant-to-be-or-sometimes-playing.html 


Saturday, October 29, 2022

The Value of Experimentation and Causal Inference in Complex Business Environments

Introduction




Summary: Causality in business means understanding how to connect the things we do with the value we create. A cause is something that makes a difference (David Lewis, Journal of Philosophy, 1973). If we are interested in what makes a difference in creating business value (what makes a difference in moving the truck above), we care about causality. Causal inference in business helps us create value by providing knowledge about what makes a difference so we can move resources from a lower valued use (having folks on the back of the truck) to a higher valued use (putting folks behind the truck).


We might hear the phrase correlation is not causation so often that it could easily be dismissed as a cliche, as opposed to a powerful mantra for improving knowledge and decision making. These distinctions have an important meaning in business and applied settings. We could think of businesses as collections of decisions and processes that move and transform resources. Business value is created by moving resources from lower to higher valued uses.  Knowledge is the most important resource in a firm and the essence of organizational capability, innovation, value creation, and competitive advantage. Causal knowledge is no exception. Part 1 of this series discusses the knowledge problem and decisions.

In business, talk can be cheap. With lots of data anyone can tell a story to support any decision they want to make. But good decision science requires more than just having data and a good story; it's about having evidence to support decisions so we can learn faster and fail smarter. In the diagram above this means being able to identify a resource allocation that helps us push the truck forward (getting people behind the truck). Confusing correlation with causation might lead us to believe value is a matter of changing shirt colors vs. moving people. We don't want to be weeks, months, or years down the road only to realize that other things are driving outcomes, not the thing we've been investing in. By that time, our competition is too far ahead for us ever to catch up, and it may be too late for us to make up for the losses of misspent resources. This is why in business, we want to invest in causes, not correlations. We are ultimately going to learn either way; the question is whether we'd rather do it faster and methodically, or slower and precariously.

How does this work? You might look at the diagram above and tell yourself - it's common sense where you need to stand to push the truck to move it forward - I don't need any complicated analysis or complex theories to tell me that. That's true for a simple scenario like that and likely so for many day to day operational decisions. Sometimes common sense or subject matter expertise can provide us with sufficient causal knowledge to know what actions to take. But when it comes to informing the tactical implementation of strategy (discussed in part 3 of this series) we can't always make that assumption. In complex business environments with high causal density (where the number of things influencing outcomes is numerous), we usually don't know enough about the nature and causes of human behavior, decisions, and causal paths from actions to outcomes to account for them well enough to know - what should I do?  What creates value? In complicated business environments intuition alone may not be enough - as I discuss in part 2 of this series we can be easily fooled by our own biases and biases in the data and the many stories that it could tell. 

From his experience with Microsoft, Ron Kohavi shares that up to two-thirds of the ideas we might test in a business environment turn out to either have flat results or harm the metric we are trying to improve. In Noise: A Flaw in Human Judgment, the authors share how often experts disagree with each other, and even with themselves at different times, because of biases in judgment and decision making. As Stephen Wendel says, you can't just wing it with bar charts and graphs when you need to know what makes a difference.

In application, experimentation and causal inference represent a way of thinking that requires careful consideration of the business problem and all the ways that our data can fool us; separating signal from noise (statistical inference) and making the connection between actions and outcomes (causal inference). Experimentation and causal inference leverage good decision science that brings together theory and subject matter expertise with data so we can make better informed business decisions in the face of our own biases and the biases in data. In the series of posts that follow, I overview in more detail the ways that experimentation and causal inference help us do these things in complex business environments.

The Value of Experimentation and Causal Inference in Complex Business Environments:

Part 1: The Knowledge Problem

Part 2: Behavioral Biases

Part 3: Strategy and Tactics


Thursday, November 4, 2021

Causal Decision Making with non-Causal Models

In a previous post I noted: 

" ...correlations or 'flags' from big data might not 'identify' causal effects, but they are useful for prediction and might point us in directions where we can more rigorously investigate causal relationships"

Recently on LinkedIn I discussed situations where we have to be careful about taking action on specific features in a correlational model, for instance changing product attributes or designing an intervention based on interpretations of SHAP values from non-causal predictive models. I quoted Scott Lundberg:

"regularized machine learning models like XGBoost will tend to build the most parsimonious models that predict the best with the fewest features necessary (which is often something we strive for). This property often leads them to select features that are surrogates for multiple causal drivers which is "very useful for generating robust predictions...but not good for understanding which features we should manipulate to increase retention."

So sometimes, we may go into a project with the intention of only needing predictions. We might just want to target offers or nudges to customers or product users and not think about this in causal terms at first. But, as I have discussed before, the conversation often inevitably turns to causality, even if stakeholders and business users don't use causal language to describe their problems.

"Once armed with predictions, businesses will start to ask questions about 'why'... they will want to know what decisions or factors are moving the needle on revenue or customer satisfaction and engagement or improved efficiencies...There is a significant difference between understanding what drivers correlate with or 'predict' the outcome of interest and what is actually driving the outcome."

This would seem to call for causal models. However, in their recent paper Carlos Fernández-Loría and Foster Provost make an exciting claim:

“what might traditionally be considered “good” estimates of causal effects are not necessary to make good causal decisions…implications above are quite important in practice, because acquiring data to estimate causal effects accurately is often complicated and expensive. Empirically, we see that results can be considerably better when modeling intervention decisions rather than causal effects.”

Now in this case they are not talking about causal models related to identifying key drivers of an outcome, so it is not contradicting anything mentioned above or in previous posts. In particular, they are talking about building models for causal decision making (CDM) that are simply focused on making decisions about who to 'treat' or target. In this particular scenario businesses are leveraging predictive models to target offers, provide incentives, or make recommendations. As discussed in the paper, there are two broad ways of approaching this problem (both sketched in code below). Let's say the problem is related to churn.

1) We could predict risk of churn and target members most likely to churn. We could do this with a purely correlational machine learning model. The output or estimand from this model is a predicted probability p() or risk score. They also refer to these kinds of models as 'outcome' models.

2) We could build a causal model that predicts the causal impact of an outreach. This would allow us to target customers that we can most likely 'save' as a result of our intervention. They refer to this estimand as a causal effect estimate (CEE). Building machine learning models that are causal can be more challenging and resource intensive.
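Here is a hedged sketch of what those two routes can look like mechanically, using made-up churn data and a simple T-learner for the CEE route (the features, the churn setup, and the learner choice are my illustrative assumptions, not details from the paper):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 20_000
X = rng.normal(size=(n, 3))                                  # customer features
p_outreach = 1 / (1 + np.exp(-X[:, 0]))                      # past outreach targeted risky-looking customers
treated = rng.binomial(1, p_outreach)                        # => observational, confounded data
p_churn = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))
churn = rng.binomial(1, np.clip(p_churn * (1 - 0.2 * treated), 0, 1))   # outreach cuts churn ~20%

# 1) 'outcome' model: predicted churn risk, ignoring treatment entirely
risk = LogisticRegression().fit(X, churn).predict_proba(X)[:, 1]

# 2) causal effect estimate (CEE) via a simple T-learner: separate outcome models
#    for treated and untreated, CEE = predicted churn without minus with outreach
m1 = LogisticRegression().fit(X[treated == 1], churn[treated == 1])
m0 = LogisticRegression().fit(X[treated == 0], churn[treated == 0])
cee = m0.predict_proba(X)[:, 1] - m1.predict_proba(X)[:, 1]  # estimated churn reduction from outreach

# either score could be used to rank customers when deciding who to target
print(risk[:5].round(3), cee[:5].round(3))
```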

It is true that at the end of the day we want to maximize our impact, and the causal decision is ultimately whom we target in order to maximize that impact. They point out that this decision does not necessarily hinge on how accurate our point estimate of the causal impact is, as long as errors in prediction still lead to the same decisions about who to target.

What they find is that in order to make good causal decisions about who to 'treat' we don't have to have super accurate estimates of the causal impact of treatment (or models focused on CEE). In fact they talk through scenarios and conditions where outcome models like #1 above, which are non-causal, can perform just as well as or sometimes better than more accurate causal models focused on CEE.

In other words, correlational outcome models (like #1) can essentially serve as proxies for the more complicated causal models (like #2), even if the data used to estimate these 'proxy' models is confounded.

 Scenarios where this is most likely include:

1) Outcomes used as proxies and (causal) effects are correlated

2) Outcomes used as proxies are easier to estimate than causal effects

3) Predictions are used to rank individuals

They also give some reasons why this may be true. Biased non-causal models built on confounded data may not be able to identify true causal effects, but they can still be useful for identifying the optimal decision.

"This could occur when confounding is stronger for individuals with large effects - for example if confounding bias is stronger for 'likely' buyers, but the effect of adds is also stronger for them...the key insight here is that optimizing to make the correct decision generally involves understanding whether a causal effect is above or below a given threshold, which is different from optimizing to reduce the magnitude of bias in a causal effect estimate."

"Models trained with confounded data may lead to decisions that are as good (or better) than the decisions made with models trained with costly experimental data, in particular when larger causal effects are more likely to be overestimated or when variance reduction benefits of more and cheaper data outweigh the detrimental effect of confounding....issues that make it impossible to estimate causal effects accurately do not necessarily keep us from using the data to make accurate intervention decisions."

Their arguments hinge on the idea that what we are really solving for in these decisions is based on ranking:

"Assuming...the selection mechanism producing the confounding is a function of the causal effect - so that the larger the causal effect the stronger the selection-then (intuitively) the ranking of the preferred treatment alternatives should be preserved in the confounded setting, allowing for optimal treatment assignment policies from data."

A lot of this really comes down to proper problem framing and appealing to the popular paraphrasing of George E. P. Box - all models are wrong, but some are useful. It turns out in this particular use case non-causal models can be as useful or more useful than causal ones.

And we do need to be careful about the nuance of the problem framing. As the authors point out, this solves one particular business problem and use case, but does not answer some of the most important causal questions businesses may be interested in:

"This does not imply that firms should stop investing in randomized experiments or that causal effect estimation is not relevant for decision making. The argument here is that causal effect estimation is not necessary for doing effective treatment assignment."

They go on to argue that randomized tests and other causal methods are still core to understanding the effectiveness of interventions and strategies for improving effectiveness. Their use case begins and ends with what is just one step in the entire lifecycle of product development, deployment, and optimization. In their discussion of further work they suggest that:

"Decision makers could focus on running randomized experiments in parts of the feature space where confounding is particularly hurtful for decision making, resulting in higher returns on their experimentation budget."

This essentially parallels my previous discussion related to SHAP values. For a great reference for making practical business decisions about when this is worth the effort see the HBR article in the references discussing when to act on a correlation.

So some big takeaways are:

1) When building a model for purposes of causal decision making (CDM) even a biased model (non-causal) can perform as well or better than a causal model focused on CEE.

2) In many cases, even a predictive model that provides predicted probabilities or risk (as proxies for causal impact or CEE) can perform as well or better than causal models when the goal is CDM.

3) If the goal is to take action based on important features (i.e., SHAP values as discussed before), however, we still need to apply a causal framework, and understanding the actual effectiveness of interventions may still require randomized tests or other methods of causal inference.


References: 

Causal Decision Making and Causal Effect Estimation Are Not the Same... and Why It Matters. Carlos Fernández-Loría and Foster Provost. 2021. https://arxiv.org/abs/2104.04103

When to Act on a Correlation, and When Not To. David Ritter. Harvard Business Review. March 19, 2014. 

Be Careful When Interpreting Predictive Models in Search of Causal Insights. Scott Lundberg. https://towardsdatascience.com/be-careful-when-interpreting-predictive-models-in-search-of-causal-insights-e68626e664b6  

Additional Reading:

Laura B Balzer, Maya L Petersen, Invited Commentary: Machine Learning in Causal Inference—How Do I Love Thee? Let Me Count the Ways, American Journal of Epidemiology, Volume 190, Issue 8, August 2021, Pages 1483–1487, https://doi.org/10.1093/aje/kwab048

Petersen, M. L., & van der Laan, M. J. (2014). Causal models and learning from data: integrating causal modeling and statistical estimation. Epidemiology (Cambridge, Mass.), 25(3), 418–426. https://doi.org/10.1097/EDE.0000000000000078

Explaining the Behavior of Black-Box Prediction Algorithms with Causal Learning. Numair Sani, Daniel Malinsky, Ilya Shpitser arXiv:2006.02482v3  

Related Posts:

Will there be a credibility revolution in data science and AI? 

Statistics is a Way of Thinking, Not a Toolbox

Big Data: Don't Throw the Baby Out with the Bathwater

Big Data: Causality and Local Expertise Are Key in Agronomic Applications 

The Use of Knowledge in a Big Data Society

Wednesday, July 7, 2021

R.A Fisher, Big Data, and Pretended Knowledge

In Thinking Fast and Slow, Kahneman points out that what matters more than the quality of evidence is the coherence of the story. In business and medicine, he notes that this kind of 'pretended' knowledge based on coherence is often sought and preferred. We all know that no matter how great the analysis, if we can't explain and communicate the results with influence, our findings may go unappreciated. But as we have learned from misinformation and disinformation about everything from vaccines to GMOs, Kahneman's insight is a double-edged sword that cuts both ways. Coherent stories often win out over solid evidence and lead to making the wrong decision. We see this not only in science and politics, but also in business.

In the book The Lady Tasting Tea by David Salsburg, we learn that R.A. Fisher was all too familiar with the pitfalls of attempting to innovate based on pretended knowledge and big data (excerpts):

"The Rothamsted Agricultural Experiment Station, where Fisher worked during the early years of the 20th century, had been experimenting with different fertilizer components for almost 90 years before he arrived...for 90 years the station ran experiments testing different combinations of mineral salts and different strains of wheat, rye, barley, and potatoes. This had created a huge storehouse of data, exact daily records of rainfall and temperature, weekly records of fertilizer dressings and measures of soil, and annual records of harvests - all of it preserved in leather bound notebooks. Most of the 'experiments' had not produced consistent results, but the notebooks had been carefully stored away in the stations archives....the result of these 90 years of 'experimentation' was a mess of confusion and vast troves of unpublished and useless data...the most that could be said of these [experiments] was that some of them worked sometimes, perhaps, or maybe."

Fisher introduced the world to experimental design and challenged the idea that scientists could make progress by tinkering alone. Instead, he motivated them to think through inferential questions: Is the difference in yield for variety A vs variety B (signal) due to superior genetics, or is this difference what we would expect to see anyway due to natural variation in crop yields (noise)? In other words, is the difference in yield statistically significant? This is the original intention of the concept of statistical significance that has gotten lost in the many abuses and misinterpretations we often hear about. He also taught us to ask questions about causality: does variety A actually yield better than variety B because it is genetically superior, or could differences in yield be explained by differences in soil characteristics, weather/rainfall, planting date, or numerous other factors? His methods taught us how to separate the impact of a product or innovation from the impact and influences of other factors.

Fisher did more than provide a set of tools for problem solving. He introduced a structured way of thinking about real world problems and the data we have to solve them. This way of thinking moved the agronomists at Rothamsted away from mere observation to useful information. This applies not only to agriculture, but to all of the applied and social sciences as well as business.

In his book Uncontrolled, Jim Manzi stressed the importance of thinking like Fisher's plant breeders and agronomists (Fisher himself was a geneticist) especially in business settings. Manzi describes the concept of 'high causal density' which is the idea that often the number of causes of variation in outcomes can be enormous with each having the potential to wash out the cause we are most interested in (whatever treatment or intervention we are studying). In business, which is a social science, this becomes more challenging than in the physical and life sciences. In physics and biology we can assume relatively uniform physical and biological laws that hold across space and time. But in business the 'long chain of causation between action and outcome' is 'highly dependent for its effects on the social context in which it is executed.' It's all another way of saying that what happens in the outside world can often have a much larger impact on our outcome than a specific business decision, product, or intervention. As a result this calls for the same approach Fisher advocated in agriculture to be applied in business settings. 

List and Gneezy address this in The Why Axis:

"Many businesses experiment and often...businesses always tinker...and try new things...the problem is that businesses rarely conduct experiments that allow a comparison between a treatment and control group...Business experiments are research investigations that give companies the opportunity to get fast and accurate data regarding important decisions."

Fisher's approach soon caught on and revolutionized science and medicine, but adoption still lags in many business settings in the wake of big data, AI, and advances in machine learning. As Jim Manzi and Stefan Thomke state in Harvard Business Review, in the absence of formal randomized testing and good experimental design:

"executives end up misinterpreting statistical noise as causation—and end up making bad decisions"

In The Book of Why, Judea Pearl laments the reluctance to embrace causality: 

"statistics, including many disciplines that looked to it for guidance remained in the prohibition era, falsely believing that the answers to all scientific questions reside in the data, to be unveiled through clever data mining tricks...much of this data centric history still haunts us today. We live in an era that presumes Big Data to be the solution to all of our problems. Courses in data science are proliferating in our universities, and jobs for data scientists are lucrative in companies that participate in the data economy. But I hope with this book to convince you that data are profoundly dumb...over and over again, in science and business we see situations where more data aren't enough. Most big data enthusiasts, while somewhat aware of those limitations, continue to chase after data centric intelligence."

These big data enthusiasts strike a strong resemblance to the researchers at Rothamsted before Fisher. List has a similar take:

"Big data is important, but it also suffers from big problems. The underlying approach relies heavily on correlations, not causality. As David Brooks has noted, 'A zillion things can correlate with each other depending on how you structure of the data and what you compare....because our work focuses on field experiments to infer causal relationships, and because we think hard about these causal relationships of interest before generating the data we go well beyond what big data could ever deliver."

We often want fast iterations and actionable insights from data. While it is true that a great analysis with no story delivered too late is as good as no analysis, it is just as true that quick insights with a coherent story based on pretended knowledge from big data can leave you running in circles getting nowhere - no matter how fast you might feel like you are running. In the case of Rothamsted, scientists ran in circles for 90 years before real insights could be uncovered using Fisher's more careful and thoughtful analysis. Even if they had today's modern tools of AI, ML, and data visualization to cut the data 1000 different ways, they still would not have been able to get any value for all of their effort. Wow, 90 years! How is that for time to insight? In many ways, despite drowning in data and the advances in AI and machine learning, many areas of business across a number of industries will find themselves in the same place Fisher found himself at Rothamsted almost 100 years ago. We will need a credibility revolution in AI to bring about the kind of culture change that will make the kind of causal and inferential thinking that comes naturally to today's agronomists (thanks to Fisher), or more recently the way Pearl's disciples think about causal graphs, more commonplace in business strategy.

Notes: 

1) Randomized tests are not always the only way to make causal inferences. In fact in the Book of Why Pearl notes in relation to smoking and lung cancer, outside of the context of randomized controlled trials "millions of lives were lost or shortened because scientists did not have adequate language or methodology for answering causal questions." The credibility revolution in epidemiology and economics, along with Pearl's work has provided us with this language. As Pearl notes: "Nowadays, thanks to carefully crafted causal models, contemporary scientists can address problems that would have once been considered unsolvable or beyond the pale of scientific inquiry." See also: The Credibility Revolution(s) in Econometrics and Epidemiology.

2) Deaton and Cartwright make strong arguments challenging the supremacy of randomized tests as the gold standard for causality (similar to Pearl), but this only furthers the importance of considering careful causal questions in business and science by broadening the toolset along the same lines as Pearl. Deaton and Cartwright also emphasize the importance of interpreting causal evidence in the context of sound theory. See: Angus Deaton, Nancy Cartwright. Understanding and misunderstanding randomized controlled trials. Social Science & Medicine, Volume 210, 2018.

3) None of this is to say that predictive modeling and machine learning cannot answer questions and solve problems that create great value to business. The explosion of the field of data science is an obvious testament to this fact. Probably the most important thing in this regard is for data scientists and data science managers to become familiar with the important distinctions between models and approaches that explain or predict. See also: To Explain or Predict and Big Data: Don't Throw the Baby Out with the Bathwater

Additional Reading

Will there be a credibility revolution in data science and AI? 

https://econometricsense.blogspot.com/2018/03/will-there-be-credibility-revolution-in.html 

Statistics is a Way of Thinking, Not a Toolbox

https://econometricsense.blogspot.com/2020/04/statistics-is-way-of-thinking-not-just.html 

The Value of Business Experiments and the Knowledge Problem

https://econometricsense.blogspot.com/2020/04/the-value-of-business-experiments-and.html

The Value of Business Experiments Part 2: A Behavioral Economic Perspective

http://econometricsense.blogspot.com/2020/04/the-value-of-business-experiments-part.html 

The Value of Business Experiments Part 3: Innovation, Strategy, and Alignment 

http://econometricsense.blogspot.com/2020/05/the-value-of-business-experiments-part.html 

Big Data: Don't Throw the Baby Out with the Bathwater

http://econometricsense.blogspot.com/2014/05/big-data-dont-throw-baby-out-with.html 

Big Data: Causality and Local Expertise Are Key in Agronomic Applications

http://econometricsense.blogspot.com/2014/05/big-data-think-global-act-local-when-it.html

The Use of Knowledge in a Big Data Society

https://www.linkedin.com/pulse/use-knowledge-big-data-society-matt-bogard/ 



Wednesday, June 2, 2021

Science Communication for Business and Non-Technical Audiences: Stigmas, Strategies, and Tactics

If you are a reader of this blog you are familiar with the number of posts I have shared about machine learning and causal inference and the benefits of education in economics. I have also discussed how there are important gaps sometimes between theory and application. 

In this post I am going to talk about another important gap related to communication. How do we communicate the value of our work to a non-technical audience? 

We can learn a lot from formal coursework, especially in good applied programs with great professors. But if we are not careful, we can also pick up mental models and habits of thinking that turn out to weigh us down, particularly for those that end up working in very applied business or policy settings. How we deal with these issues becomes important to career professionals and critical to those involved in science communication in general, whether we are trying to influence business decision makers, policy makers, or consumers and voters.

In this post I want to discuss communicating with intent, paradigm gaps, social harassment costs, and mental accounting.

As stated in The Analytics Lifecycle Toolkit: "no longer is it sufficient to give the technical answer, we must be able to communicate for both influence and change."

Communicating to Business and Non-Technical Audiences - or - The Laffer Curve for Science Communication

For those who plan to translate their science backgrounds to business audiences (like many data scientists coming from scientific backgrounds) what are some strategies for becoming better science communicators?  In their book Championing Science: Communicating your Ideas to Decision Makers Roger and Amy Aines offer lots of advice. You can listen to a discussion of some of this at the BioReport podcast here. 

Two important themes they discuss are the ideas of paradigm gaps and intent. Scientists can be extremely efficient communicators through the lens of the paradigms they work in.

As discussed in the podcast, a paradigm is all the knowledge a scientist or economist may have in their head specific to their field of study and research. Unfortunately there is a huge gap between this paradigm and its vocabulary and what non-technical stakeholders can relate to. They have to meet stakeholders where they are, vs. the audience they may find at conferences or research seminars. From experience, different stakeholders and audiences across different industries have different gaps. If you work for a consultancy with external pharma clients, they might have a different expectation about statistical rigor than, say, a product manager in a retail setting. Even within the same business or organization, the tactics used in solving for the gap for one set of stakeholders might not work at all for a new set of stakeholders if you change departments. In other words, know your audience. What do they want or need or expect? What are their biases? What is their level of analytic or scientific literacy? How risk averse are they? Answers to these questions are a great place to start in terms of filling the paradigm gaps and addressing the second point made in the podcast - speaking with intent.

As discussed in the podcast: "many scientists don't approach conversations or presentations with a real strategic intent in terms of what they are communicating...they don't think in terms of having a message....they need to elevate and think about the point they are trying to make when speaking to decision makers." 

As Bryan Caplan states in his book The Myth of the Rational Voter, when it comes to speaking to non-economists and the general public, they should apply the Laffer curve of learning, "they will retain less if you try to teach them more."

He goes on to discuss that it's not just what we say, but how we position it, especially when dealing with resistance related to misinformation and disinformation and systemic biases:

"irrationality is not a barrier to persuasion, but an invitation to alternative rhetorical techniques...if beliefs are in part consumed for their direct psychological benefits then to compete in the marketplace of ideas, you need to bundle them with the right emotional content."

In the Science Facts and Fallacies podcast (May 19, 2021) Kevin Folta and Cameron English discuss:

"We spend so much time trying to convince people with scientific principles....it's so important for us to remember what we learn from psychology and sociology (and economics) matters. These are turning out to be the most important sciences in terms of forming a conduit through which good science communication can flow."

Torsten Slok offers great advice in his discussion with Barry Ritholtz about working in the private sector as a PhD economist in the Masters in Business Podcast back in 2018: 

"there is a different sense of urgency and an emphasis on brevity....we offer a service of having a view on what the economy will do what the markets will do - lots of competition for attention...if you write long winded explanations that say that there is a 50/50 chance that something will happen many customers will not find that very helpful."

So there are a lot of great data science and science communicators out there with great advice. A big problem is this advice is often not part of the training that many of those with scientific or technical backgrounds receive, and an even bigger problem is that it is often looked down upon and even punished! I'll explain more below.

The Negative Stigma of Science Communication in the Data Science and Scientific Community

One of the most egregious things I see on social media is someone trying their best to help mentor those new to the analytical space (and improve their own communication skills) by sharing a post that attempts to describe some complicated statistical concept in 'layman's' terms - only to be rewarded with harassing and trolling comments. Usually this is about how they didn't capture every particular nuance of the theory, failed to include a statement about certain critical assumptions, or oversimplified the complex thing they were trying to explain in simple terms to begin with. This kind of negative social harassment seems to be par for the course when attempting to communicate statistics and data science on social media like LinkedIn and Twitter.

Similarly in science communication, academics can be shunned by their peers when attempting to do popular writing or communication for the general public. 

In 'The Stoic Challenge' author William Irvine discusses Daniel Kahneman's challenges with writing a popular book:

"Kahneman was warned that writing a popular book would cause harm to his professional reputation...professors aren't supposed to write books that normal people can understand."

He describes how, when Kahneman's book Thinking Fast and Slow made the New York Times bestseller list, Kahneman "sheepishly explained to his colleagues that the book's appearance there was a mistake."

In an EconTalk interview with economist Steven Levitt, Russ Roberts asks Levitt about writing his popular book Freakonomics:

"What was the reaction from your colleagues in the profession...You know, I have a similar route. I'm not as successful as you are, but I've popularized a lot of economics...it was considered somewhat untoward to waste your time speaking to a popular audience."

Levitt responded by saying the reaction was not so bad, but the fact that Russ had to broach the topic is evidence of the toxic culture that academics face when doing science communication. The negative stigma associated with good science communication is not limited to economics or the social and behavioral sciences. 

In his Talking Biotech podcast episode Debunking the Disinformation Dozen, scientist and science communicator Kevin Folta discusses his strident efforts facing off these toxic elements:

"I have always said that communication is such an important part of what we do as scientists but I have colleagues who say you are wasting your time doing this...Folta why are you wasting your time doing a podcast or writing scientific stuff for the public."

Some of this is just bad behavior, some of it is gatekeeping done in the name of upholding the scientific integrity of their field, some of it is the attempt of others to prove their competence to themselves or others, and maybe some of it is the result of people genuinely trying to provide peer review to colleagues they think have gone astray. But most of it is unhelpful when it comes to influencing decision makers or improving general scientific literacy. It doesn't matter how great the discovery or how impactful the findings; we have all seen from the pandemic that effective science communication is critical for overcoming the effects of misinformation and disinformation. A culture that is toxic toward effective science communication becomes an impediment to science itself and leaves a void waiting to be filled by science deniers, activists, policy makers, decision makers, and special interests.

This can be challenging when you add the Dunning-Kruger effect to the equation. Those that know the least may be the most vocal while scientists and those with expertise sit on the sidelines. As Bryan Caplan states in his book The Myth of the Rational Voter:

"There are two kinds of errors to avoid. Hubris is one, self abasement is the other. The first leads experts to over reach themselves; the second leads experts to stand idly by while error reigns."

How Does Culture and Mental Accounting Impact Science Communication?

So as I've written above, in the scientific community there is sort of a toxic culture that inhibits good science communication. In the Two Psychologists Four Beers podcast  (WARNING: the intro of this podcast episode may contain vulgarity) behavioral scientist Nick Hobson makes an interesting comparison between MBAs and scientists. 

"as scientists we need to be humble with regards to our data...one thing we are learning from our current woes of replication (the replication crisis) is we know a lot less than we think. This has conditioned us to be more humble....vs. business school people that are trained to be more assertive and confident."

I'd like to propose an analogy relating to mental accounting. It seems like when a scientist gets their degree it comes with a mental account called scientific credibility. Speaking and writing to a general audience risks taking a charge against that account, and they are trained to be extremely frugal about managing it. Communication becomes an exercise in risk management. If they say or communicate something with the slightest error, missing the slightest nuance, a colleague may call them out. Gotcha! Psychologically, this would call for a huge charge against their 'account' and reputation. It's not quite a career-ending mistake like making a fraudulent claim or faking data, but it's bad enough to be avoided at great cost. MBAs don't have a mental account called scientific credibility. They aren't long on academic credibility, so they don't require putting on the communication hedges the way scientists often do. They come off as better communicators and more confident, while scientists risk becoming stereotyped as ineffective communicators.

To protect their balance at all costs and avoid social harassment from their peers, economists and scientists may tend to speak with caveats, hedges, and qualifications. This may also mean a delayed response. In many cases, before even thinking about communicating results, they must do in-depth rigorous analysis, sensitivity checks, etc. It requires doing science, which is by nature slow, while the public wants answers fast. Faster answers might mean less time for analysis, which calls for more caveats. This can all be detrimental to effective communication to non-technical audiences. Answers become either too slow or too vague to support decision making (recall Torsten Slok's comments above). It gives the impression of a lack of confidence and relevance and a stereotype that technical people (economists, scientists, data scientists, etc.) fail to offer definitive or practical conclusions. As Bryan Caplan notes discussing the role of economists in The Myth of the Rational Voter:

"when the media spotlight gives other experts a few seconds to speak their mind, they usually strive to forcefully communicate one or two simplified conclusions....but economists are reluctant to use this strategy. Though the forum demands it they think it unseemly to express a definitive judgement. This is a recipe for being utterly ignored."

Students graduating from economics and science based graduate programs may inherit these mental accounts and learn these 'hedging strategies' from their professors, from the program, and the seminar culture that comes with it.

Again, Nick Hobson offers great insight about how to deal with this kind of mental accounting in his own work:

"what I've wrestled with as I've grown the business is maintaining scientific integrity and the rigor but knowing you have to sacrifice some of it....you have to find and strike a balance between being data driven and humble while also being confident and strategic and cautious about the shortcuts you take."

In Thinking Fast and Slow, Kahneman argues that sometimes new leaders can produce better results because fresh thinkers can view problems without the same mental accounts holding back incumbents. The solution isn't to abandon scientific training and the value it brings to the table in terms of rigor and statistical and causal reasoning. The solution is to learn how to view problems in a way that avoids the kind of mental accounting I have been discussing. This also calls for a cultural change in the educational system. As Kevin Folta stated in the previous Talking Biotech Podcast:

"Until we have a change in how the universities and how the scientific establishment sees these efforts as positive and helpful and counts toward tenure and promotion I don't think you are going to see people jump in on this." 

Given that graduate and PhD training may come with such baggage, one alternative may be to develop programs with more balance, like Professional Science Master's degrees, or at least create courses or certificates that focus on translational knowledge and communication skills. Or seek out graduate study under folks like Dr. Folta who are great scientists and researchers that can also help you overcome the barriers to communicating science effectively. If that is the case we are going to need more Dr. Foltas.

References:

The Myth of the Rational Voter: Why Democracies Choose Bad Policies. Bryan Caplan. Princeton University Press. 2007.

The Stoic Challenge: A Philosopher's Guide to Becoming Tougher, Calmer, and More Resilient. William Braxton Irvine. Norton & Co., NY. 2019.

The Analytics Lifecycle Toolkit: A Practical Guide for an Effective Analytics Capability. Gregory S. Nelson. 2018.

Thursday, April 1, 2021

The Value of Business Experiments Part 3: Innovation, Strategy, and Alignment

In previous posts I have discussed the value proposition of business experiments from both a classical and behavioral economic perspective. This series of posts has been greatly influenced by Jim Manzi's book 'Uncontrolled: The Surprising Payoff of Trial-and-Error for Business, Politics, and Society.' Midway through the book Manzi highlights three important things that experiments in business can do:

1) They provide precision around the tactical implementation of strategy
2) They provide feedback on the performance of a strategy which allows for refinements to be driven by evidence
3) They help achieve organizational and strategic alignment

Manzi explains that within any corporation there are always silos and subcultures advocating competing strategies with perverse incentives and agendas in pursuit of power and control. How do we know who is right and which programs or ideas are successful considering the many factors that could be influencing any outcome of interest? Manzi describes any environment where the number of causes of variation is enormous as an environment that has 'high causal density.' We can claim to address this with a data driven culture, but what does that mean? Modern companies in a digital age with AI and big data are drowning in data. This makes it easy to adorn rhetoric in advanced analytical frameworks. Because data seldom speaks, anyone can speak for the data through wily data storytelling.

As Jim Manzi and Stefan Thomke discuss in Harvard Business Review:

"business experiments can allow companies to look beyond correlation and investigate causality....Without it, executives have only a fragmentary understanding of their businesses, and the decisions they make can easily backfire."

In complex environments with high causal density, we don't know enough about the nature and causes of human behavior, decisions, and causal paths from actions to outcomes to list them all and measure and account for them even if we could agree how to measure them. This is the nature of decision making under uncertainty. But, as R.A. Fisher taught us with his agricultural experiments, randomized tests allow us to account for all of these hidden factors (Manzi calls them hidden conditionals). Only then does our data stand a chance to speak truth.

In Dual Transformation: How to Reposition Today's Business While Creating the Future, the authors discuss the importance of experimentation as a way to navigate uncertainty in causally dense environments in what they refer to as transformation B:

“Whenever you innovate, you can never be sure about the assumptions on which your business rests. So, like a good scientist, you start with a hypothesis, then design and experiment. Make sure the experiment has clear objectives (why are you running it and what do you hope to learn). Even if you have no idea what the right answer is, make a prediction. Finally, execute in such a way that you can measure the prediction, such as running a so-called A/B test in which you vary a single factor."

Experiments aren't just tinkering and trying new things. While these are helpful to innovation, just tinkering and measuring and observing still leaves you speculating about what really works and is subject to all the same behavioral biases and pitfalls of big data previously discussed.

List and Gneezy address this in The Why Axis:

"Many businesses experiment and often...businesses always tinker...and try new things...the problem is that businesses rarely conduct experiments that allow a comparison between a treatment and control group...Business experiments are research investigations that give companies the opportunity to get fast and accurate data regarding important decisions."

Three things distinguish a successful business experiment from just tinkering:

1) Separation of signal from noise through well-designed and sufficiently powered tests (see the power calculation sketched after this list)
2) Connecting cause and effect through randomization 
3) Clear signals on business value that follows from 1 & 2 above
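As a concrete illustration of point 1, here is a small power-calculation sketch (the baseline conversion rate and the lift are illustrative assumptions) showing roughly how many observations per arm a well-powered test of a small lift can require:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.055, 0.050)          # standardized effect of a 5.0% -> 5.5% lift
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
)
print(round(n_per_arm))   # roughly 15,000-16,000 per arm for a lift this small
```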

Having causal knowledge helps identify more informed and calculated risks vs. risks taken on the basis of gut instinct, political motivation, or overly optimistic data-driven correlational pattern finding analytics. 

Experiments add incremental knowledge and value to business. No single experiment is going to be a 'killer app' that by itself will generate millions in profits. But in aggregate the knowledge created by experiments probably offers the greatest strategic value across an enterprise compared to any other analytic method.

As discussed earlier, business experiments create value by helping manage the knowledge problem within firms. It's worth repeating List and Gneezy again:

"We think that businesses that don't experiment and fail to show, through hard data, that their ideas can actually work before the company takes action - are wasting their money....every day they set suboptimal prices, place adds that do not work, or use ineffective incentive schemes for their work force, they effectively leave millions of dollars on the table."

As Luke Froeb writes in Managerial Economics, A Problem Solving Approach (3rd Edition):

"With the benefit of hindsight, it is easy to identify successful strategies (and the reasons for their success) or failed strategies (and the reason for their failures). It's much more difficult to identify successful or failed strategies before they succeed or fail."

Again from Dual Transformation:

"Explorers recognize they can't know the right answer, so they want to invest as little as possible in learning which of their hypotheses are right and which ones are wrong"

Business experiments offer the opportunity to test strategies early on a smaller scale to get causal feedback about potential success or failure before fully committing large amounts of irrecoverable resources. This takes the concept of failing fast to a whole new level. As discussed in The Why Axis and Uncontrolled, business experiments play a central role in product development and innovation across a range of industries and companies from Harrah's casinos, Capital One, and Humana who have been leading in this area for decades to new ventures like Amazon and Uber. 

"At Uber Labs, we apply behavioral science insights and methodologies to help product teams improve the Uber customer experience. One of the most exciting areas we’ve been working on is causal inference, a category of statistical methods that is commonly used in behavioral science research to understand the causes behind the results we see from experiments or observations...Teams across Uber apply causal inference methods that enable us to bring richer insights to operations analysis, product development, and other areas critical to improving the user experience on our platform." - From: Using Causal Inference to Improve the Uber User Experience (link)

Achieving the greatest value from business experiments requires leadership commitment.  It also demands a culture that is genuinely open to learning through a blend of trial and error, data driven decision making informed by theory, and the infrastructure necessary for implementing enough tests and iterations to generate the knowledge necessary for rapid learning and innovation. The result is a corporate culture that allows an organization to formulate, implement, and modify strategy faster and more tactfully than others.

See also:
The Value of Business Experiments: The Knowledge Problem
The Value of Business Experiments Part 2: A Behavioral Economics Perspective
Statistics is a Way of Thinking, Not a Box of Tools