Bootstrapped decision-making models: When
Posted: October 29, 2011 | Author: admin | Filed under: tools | Tags: bootstrap, bootstrapping, decision-making, process, random error, risk
In practice, bootstrapped models have many uses. The most scalable way to use these models is for informal “gut checks” on analysis. The human mind works in funny ways: simply having a parallel bootstrapped model to casually refer to can quell fears or heighten suspicions, potentially improving even a poorly codified decision-making process.
Performance benchmarking is another useful area for bootstrapped models. Because it strips out the random error in human judgment, a bootstrapped model can serve as a useful benchmark for analyzing and evaluating past decisions to figure out what went wrong (or what went right).
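A minimal sketch of this benchmarking idea: score each past decision with the (noise-free) bootstrapped model and treat the gap between the human's call and the model's call as a rough measure of inconsistency. The weights, criteria, and decision data below are all hypothetical.

```python
# Benchmark past analyst decisions against a bootstrapped model.
# Weights and data are illustrative assumptions, not fitted values.

def model_score(criteria, weights=(0.7, 0.3)):
    """The bootstrapped model: a fixed weighted sum of decision criteria."""
    return sum(w * c for w, c in zip(weights, criteria))

decisions = [  # (criteria, score the analyst actually gave)
    ((0.9, 0.2), 0.75),
    ((0.4, 0.8), 0.40),   # a noticeably erratic call
    ((0.7, 0.5), 0.66),
]

# Gap between each human call and the model's call = inconsistency proxy.
gaps = [abs(human - model_score(c)) for c, human in decisions]

# The decision that deviated most from the analyst's own pattern.
worst = max(range(len(gaps)), key=lambda i: gaps[i])
```

Ranking decisions by their gap gives a natural starting point for a post-mortem: the largest deviations are the decisions most worth re-examining.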
Risk management is another interesting application. Hard-coding checks to catch decisions that lie outside a predefined range of outcomes/values (derived from bootstrapped models) is a direct way to catch exaggerated instances of an analyst’s natural human inconsistency. And it’s fundamentally different from a purely quantitative risk management methodology, since the measure of risk is a deviation from what a human would have theoretically done following her or his own approach.
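The risk-management check described above can be hard-coded in a few lines: compare each live decision to what the bootstrapped model predicts the analyst would "usually" do, and flag deviations beyond a tolerance. The weights and tolerance below are illustrative assumptions.

```python
# Hypothetical risk check: flag analyst decisions that drift too far from
# what the bootstrapped model implies the analyst would normally do.

MODEL_WEIGHTS = (0.7, 0.3)   # assumed to come from a previously fitted model
TOLERANCE = 0.15             # max acceptable |human - model| deviation

def model_score(criteria):
    return sum(w * c for w, c in zip(MODEL_WEIGHTS, criteria))

def flag_if_inconsistent(criteria, human_score):
    """True when the live decision falls outside the range derived
    from the bootstrapped model."""
    return abs(human_score - model_score(criteria)) > TOLERANCE

# A decision close to the analyst's own historical pattern passes...
ok = flag_if_inconsistent((0.8, 0.4), 0.70)    # model says 0.68
# ...while an exaggerated outlier is caught for review.
caught = flag_if_inconsistent((0.8, 0.4), 0.95)
```

Note that this flags deviation from the analyst's own modeled approach, not from a market- or portfolio-level risk measure, which is exactly the distinction drawn above.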
Finally, one could go all in and build fully deployed decision-making tools from bootstrapped models. The resulting tools would loosely resemble a true quant model, though using a model in this manner still requires updating it as the human perspective changes.
Bootstrapped decision-making models: Why
Posted: October 28, 2011 | Author: admin | Filed under: tools | Tags: bootstrap, bootstrapping, decision-making, innovation, qualitative
What’s better – using pure quantitative models or more qualitative, human-based decision-making?
Who knows. Regardless of the answer, one should consider an analytical technique that combines the attractive aspects of both approaches: the use of “bootstrapped” models.
Building a bootstrapped model involves taking a set of decisions made by a human, breaking them down (statistically) to see which decision criteria are the key drivers, and then using those criteria to re-build (or “bootstrap”) a quantitative model that closely resembles the human knowledge that was mined in the first place. This is not nearly as circular as it sounds. In theory, the rebuilt model should take the best of human decision-making and smooth some of the inconsistencies that are natural to human behavior.
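The process above can be sketched with a simple linear fit: regress past human decisions on the criteria behind them, then reuse the fitted weights as the model. The data, criteria, and two-criteria restriction here are all hypothetical simplifications.

```python
# Sketch of "bootstrapping" a decision-maker: fit a linear model to past
# human decisions, then use the fitted weights as the quantitative model.
# All ratings and criteria values below are made up for illustration.

def fit_weights(X, y):
    """Ordinary least squares for two criteria (no intercept),
    solved directly via the 2x2 normal equations."""
    a = sum(x[0] * x[0] for x in X)
    b = sum(x[0] * x[1] for x in X)
    c = sum(x[1] * x[1] for x in X)
    d = sum(x[0] * yi for x, yi in zip(X, y))
    e = sum(x[1] * yi for x, yi in zip(X, y))
    det = a * c - b * b
    return ((c * d - b * e) / det, (a * e - b * d) / det)

# Past decisions: two criteria per case (X) and the analyst's rating (y).
X = [(0.9, 0.2), (0.4, 0.8), (0.7, 0.5), (0.2, 0.9), (0.8, 0.6)]
y = [0.75, 0.55, 0.66, 0.47, 0.76]

w1, w2 = fit_weights(X, y)

def bootstrapped_score(criteria):
    """The rebuilt model: the analyst's implied weighting, minus the noise."""
    return w1 * criteria[0] + w2 * criteria[1]
```

The fitted weights recover the analyst's implicit weighting of the criteria, while the scoring function applies that weighting with perfect consistency — which is the whole point of the smoothing described above.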
Why does this work?
Humans are not perfect. The strength of human decision-making (in my humble opinion) is the mind’s ability to port concepts from different areas and apply this “unrelated” learning to new situations. This is in effect a description of innovation, and humans (along with nature, more broadly) have proven adept at pursuing this. Human decisions, over time, lead to innovative outcomes and insight.
However, this attribute comes with a steep price: inconsistency. Quantitative models will always win in terms of consistency. In many decision-making situations that humans face, the consistency of the decisions can make or break results, so quantitative approaches have a key attribute that cannot be ignored. Because a quantitative model is inherently limited by the inputs and weighting logic of decision criteria, the following question arises: are quantitative models too rational for their own good?
Bootstrapped models can rectify the shortcomings of both approaches by finding a middle ground. It is a classic case of leveraging human insight through a quantitative approach (as opposed to building a quantitative model and then using human insight to fix it when it breaks – that’s a much different beast).