Posted on May 16, 2016 @ 08:29:00 AM by Paul Meagher
I posted a number of recent blogs on the lens model. When you need to make a judgment about something that involves uncertainty (e.g., whether to invest in a startup), you can construct a lens model to represent the indicators you think are relevant to making that decision (e.g., good team, good business plan, meeting milestones, rate of new customers, potentially profitable, etc...). You can construct a lens model for many different types of judgments.
In today's blog I want to talk about a tool you can use to create a lens model diagram. The tool is a free, open-source program called Graphviz. You can download Graphviz to your computer and generate high-quality graph visualizations, or you can paste your graph recipes into this online tool for generating graphs. Note that the online graphs are lower resolution than what you get if you download the software.
Graphviz is software for visualizing graphs that are specified using the DOT language. The DOT language includes only a few features, so it is quick to get started with. Here is a recipe you can use to get started making your own lens diagrams.
graph lens {
    rankdir=LR; /* lay the nodes out left to right: world, indicators, judge */
    world -- indicator1 [label="0.6", penwidth=3];
    world -- indicator2 [label="0.3", penwidth=2];
    world -- indicator3 [label="0.1", penwidth=1];
    indicator1 -- judge [label="0.6", penwidth=3];
    indicator2 -- judge [label="0.3", penwidth=2];
    indicator3 -- judge [label="0.1", penwidth=1];
    judge -- world [label="Accuracy"];
}
When I paste this graph recipe into the GVEdit program that comes with the Graphviz software, and hit the "Run" button, it generates this generic lens diagram:
I could, for example, modify this graph for a salary estimation problem.
Say I wanted to estimate the salary of a professor in a university setting. I could identify several salary indicators, such as number of publications, professorial level, teacher rating, age and other factors, and use these indicators to construct an estimate that could then be compared to professors' actual salaries to assess accuracy. Some such formula might be what is actually used to determine professors' salaries, and I could be more or less calibrated to the factors involved depending on which indicators I used and whether I included and weighted each indicator properly. In the diagram above, the cue utilization validities (the numbers appearing above the lines from Indicators to Judge) exactly correspond to the ecological validities of the indicators (the numbers appearing above the lines from World to Indicators), which is what perfectly calibrated judgment would look like.
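To make the organizing step concrete, here is a minimal Python sketch of the weighted-sum judgment the lens diagram implies. The indicators, weights, and base salary are entirely hypothetical, chosen only to show how cue values combine into a single estimate:

```python
# Hypothetical weighted-sum judgment for the salary estimation example.
# All indicator names, weights, and the base salary are made up.

def estimate_salary(indicators, weights, base=40000):
    """Linear organizing principle: base plus weight times each cue value."""
    return base + sum(weights[name] * value for name, value in indicators.items())

# Treat these utilization weights as if they matched the ecological
# validities exactly, i.e. a perfectly calibrated judge.
weights = {
    "publications": 500,     # dollars per publication
    "level": 10000,          # dollars per professorial rank
    "teacher_rating": 2000,  # dollars per rating point
}

professor = {"publications": 30, "level": 2, "teacher_rating": 4}
print(estimate_salary(professor, weights))  # 40000 + 15000 + 20000 + 8000 = 83000
```

A miscalibrated judge would simply be a different `weights` dictionary than the one the world actually uses.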
Researchers have been using the lens model to study judgment and decision making for over half a century now. Lens models are probably not used as much as they could be as a strategy for dealing with uncertainty. If you want to use them, it helps to have a tool that lets you easily construct a lens model for whatever judgment or decision problem interests you. The DOT language and the Graphviz software allow you to do exactly that.
Posted on May 6, 2016 @ 08:35:00 PM by Paul Meagher
I never really gave much thought to the practical importance of the philosophical distinction between correspondence and coherence theories of truth until I read Kenneth Hammond's book Human Judgment and Social Policy: Irreducible Uncertainty, Inevitable Error, Unavoidable Injustice (1996). It turns out that in research on judgment and decision making the distinction is very important because it defines what researchers consider "good" or "correct" judgment and decision making. For someone subscribing to a coherence theory of truth, the truth of a statement is determined by how well it fits with other things we take to be true, such as probability theory.
Nobel Laureate Daniel Kahneman, in his best-selling book Thinking, Fast and Slow (2011), discusses a variety of experiments that purport to demonstrate how poorly humans often reason because their reasoning does not accord with the rules of probability theory. The experiments document many different types of biases (anchoring, framing, availability, recency, etc...) to which human reasoning is subject, as measured by disagreement with those rules.
Professor Kenneth Hammond, and before him, his mentor Egon Brunswik, were not big fans of the coherence theory of truth. They preferred a correspondence theory where the truth of a statement is determined by whether it corresponds to the facts. They believed that our access to the facts was often mediated by multiple fallible indicators.
We may not be able to verbalize some of the indicators we use in our judgment, or how we are combining them, but our intuitive understanding can lead to accurate judgments about the world even if we don't have a fully coherent account of why we believe what we do. Often the judgment rule turns out to be a simple linear model that combines information from multiple fallible indicators. Experiments in this tradition involve people making judgments about states of the world based upon indicator information and examining the accuracy of their judgments, the ecological validity of the indicators, and whether judges utilize the indicator in a way that corresponds to its ecological validity.
So how you conduct and interpret experiments in judgment and decision making is affected by whether you believe correspondence theories of truth are superior to coherence theories of truth, or vice versa. They are metatheories that shape the specific theories we come up with and how we study them.
The distinction is relevant to entrepreneurship. For example, a business plan is arguably a document designed to present a coherent account of why the business will succeed. If you've ever questioned the value of a business plan, it could be because it is a document that is judged on coherence criteria, while the actual success of the startup will depend upon whether the startup's value hypothesis and growth engine hypothesis correspond with reality. Eric Ries, in his best-selling and influential book The Lean Startup (2011), discussed many techniques for validating these two hypotheses. Although he does not discuss the correspondence theory of truth as his metatheory, it is pretty obvious he subscribes to it.
In practice, the correspondence theory of truth often involves defining and measuring indicators and making decisions based on them. In the lean startup, Eric Ries advocates looking for indicators to prove that your value hypothesis is true. If the measured indicators don't prove out your value hypothesis, you may need to start pivoting until you find a value hypothesis that appears correct according to the numbers. If your value hypothesis looks good, then you will need to validate your growth hypothesis by defining and measuring key performance indicators for growth. The lean startup approach is very experiment and measurement driven because it is a search for correspondence between the value and growth hypotheses and reality.
This diagram should actually be two lens models, one for the value hypothesis and one for the growth hypothesis. I'm being lazy. The lens model for the value hypothesis asks what indicators can we use to measure whether our product or service delivers the value that we claim it does. The lens model for the growth hypothesis asks what indicators we can use to measure whether our growth engine is working. You should read the book if you want examples of how indicators of value and growth were defined, measured and used in the various startups discussed.
One reason why the lean startup theory is useful is that success in starting a business is defined more in terms of correspondence with reality than coherence with other beliefs we might hold to be true. There are lots of situations where the coherence theory of truth might be useful, such as narratives about the meaning of life and social interactions where truth is a matter of perception and plausible storytelling, but that does not get you very far if you are a startup or running a business. Correspondence is king.
If correspondence is king, you might find the lean startup lens model above offers a simple visualization that can be used to remind you of how accurate judgments regarding the value and growth hypothesis for startups are achieved.
Posted on May 4, 2016 @ 08:49:00 AM by Paul Meagher
Two topics that I like to blog about are lens models and decision trees. Today I want to offer up suggestions for how lens models might be constructed from decision trees.
Recall that a lens model looks something like this (taken from this blog):
Recall also that a fully specified decision tree looks something like this (taken from this blog):
Notice that the decision tree includes two factors: how much nitrogen to apply (100k, 160k or 240k per acre) and the quality of the growing season (poor, average, good). In the context of a lens model, these might be viewed as indicators of what the yield will be at the end of the growing season. In other words, if the "intangible state" we are trying to judge is the amount of corn we will get at the end of a growing season, then two critical indicators are how much nitrogen is applied and what the quality of the growing season will be like (which in turn might be indicated by the amount of rain). We have control over one of those indicators (how much nitrogen to apply) but not the other (what the weather will be like). The main point I want to make here is that it is relatively easy to convert a decision tree to a lens model by making each factor in your decision tree an indicator in your lens model.
I don't want to get into the technical details of how decision tree algorithms work, but in general they work by recording various "features" associated with a target outcome you are interested in. For example, if you want to make a decision about whether a c-section will be required to deliver a baby, you can look at all the c-section births and all the non c-section births and record standardized information about all those cases. Then you look for the single feature that best discriminates between c-section and non c-section births. That feature will likely not be a perfect discriminator, so you take the cases it sorts imperfectly and use the next best feature to discriminate among them, and so on. If you do this you come up with the decision tree shown below, which can be captured more simply in the if-then rule that is also shown below:
We can construct a lens model from this tree, or from the if-then rule, where each of the three factors is an indicator in our lens model. If we use the thickness of the line connecting the judge to the indicator to represent the strength of the relationship, the first indicator would have a thicker line than the second indicator, which in turn would be thicker than the third indicator. The first indicator captures the most variance, followed by the second, followed by the third. This is how algorithms that generate decision trees work, so when we construct lens models based on them, we should expect them to have a certain form.
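The "find the best discriminating feature" step can be sketched by scoring each feature with information gain, the criterion most decision tree algorithms use. The feature names and toy records below are made up for illustration; they are not the actual c-section data:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, feature):
    """Entropy reduction from splitting the cases on one feature."""
    n = len(rows)
    remainder = 0.0
    for v in set(row[feature] for row in rows):
        subset = [lab for row, lab in zip(rows, labels) if row[feature] == v]
        remainder += (len(subset) / n) * entropy(subset)
    return entropy(labels) - remainder

# Toy birth records (hypothetical features); label 1 marks a c-section.
rows = [
    {"breech": 1, "previous_csec": 1, "fetal_distress": 0},
    {"breech": 1, "previous_csec": 0, "fetal_distress": 1},
    {"breech": 0, "previous_csec": 0, "fetal_distress": 0},
    {"breech": 0, "previous_csec": 0, "fetal_distress": 0},
    {"breech": 0, "previous_csec": 0, "fetal_distress": 0},
]
labels = [1, 1, 0, 1, 0]

# The root of the tree is the highest-gain feature; a tree builder
# would then recurse on the remaining cases with the remaining features.
best = max(rows[0], key=lambda f: information_gain(rows, labels, f))
print(best)  # breech
```

The recursion on the remaining cases is exactly the "take the cases it sorts imperfectly and use the next best feature" step described above.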
The point of this blog is to show that there are several formal techniques we might use to generate a lens model. Multiple linear regression is one previously discussed technique. Today I discussed the use of decision tree algorithms as another technique. A decision tree algorithm also suggests a plausible psychological strategy for coming up with indicators; namely, pick an indicator that accounts for most of the target cases. If there are some cases it doesn't handle, pick another indicator that might filter out more of the cases it doesn't handle, and so on. You might not have to use many indicators before you arrive at a set of indicators that captures enough of the data to satisfy you.
Multiple linear regression and decision tree algorithms are two formal techniques you can use to make the indicators used in judgment explicit, and they offer concrete approaches to thinking about how common sense, which we often find difficult to explain, might work and be improved upon. Doctors making decisions about c-sections might have relied upon common sense that included consideration of the factors studied, but the formal techniques helped to identify the relevant indicators and the overall strength of the relationship between the indicators and the need for a c-section. Where multiple regression is a more holistic/parallel method of finding indicators, decision tree learning algorithms strike me as a more analytic/sequential method of finding judgment indicators.
Below is a lecture by machine learning guru Tom Mitchell on decision tree learning that is set to start with him discussing the c-section example.
So you have a piece of music notation and you ask a bunch of musicians to convey certain emotions with that music and, correspondingly, you ask listeners to state what emotion is being conveyed by the performed music piece. What Juslin's research showed is that skilled music performers are pretty good at expressing emotion using various performance cues (tempo, loudness, spectrum, articulation, etc...) that signal the emotion they were asked to convey because music listeners were pretty good at reporting on what the intended emotion was based on the musician's performance cues.
Juslin's lens model of emotional communication in music is interesting for a couple reasons:
It offers a useful example of how the lens model can be used to understand uncertainty associated with performance and not just uncertainty associated with perception and judgment. In the case of music, there can be uncertainty associated with the means you should use to achieve a desired emotional effect. The conveyance of emotion through branding might be conceptualized in a similar manner, and whether people pick up on the emotion(s) your brand is trying to convey would have to be tested by whether potential consumers report some of the keywords associated with those emotions. Brunswik's probabilistic functionalism included the idea that there is uncertainty involved in the selection of means to achieve goals, but there hasn't been nearly as much research on the performance side of the lens model as on the perception/judgment side. This diagram offers up an example of how the lens model can be applied to performance situations.
The diagram also shows how the performer and the listener each have their own lens - the performer trying to communicate emotion (output side) and the listener implicitly trying to glean the emotional intent from the performance (input side). In Juslin's study there was success in communicating emotional intent through music performance to listeners, but what happens if you are not so skilled or the listener is taking in the music in a loud bar under the influence of alcohol? There might be little emotional communication between the performer and the listener in such circumstances. Or there might be compensation by the skilled musician to the loud bar context that involves using other cues to achieve the emotional effects they are looking for. Brunswik called the flexible use of alternative cues/means in judgment/performance vicarious functioning and it was a very important idea in his probabilistic functionalism framework.
The lens model in the above form might offer up a framework for understanding the relationship between an entrepreneur pitching an investment opportunity and an investor picking up on the cues that suggest it is a good investment opportunity. There are a variety of semi-reliable indicators of success and character that an entrepreneur tries to communicate to an investor in an investment pitch, which the investor may or may not pick up on. Even if the investor picks up on them, they have their own lens model of what constitutes a good investment or good character that may not correspond in the first place with what the entrepreneur is trying to communicate in their pitch. The lens model as extended by Juslin provides a framework for both understanding and studying what goes into successful pitching, as it emphasizes studying both the performance and the perceptual aspects of cue utilization in pitching investment opportunities. Some of the cues that go into successful face-to-face investment pitching (e.g., novelty, prizing, status, attention control, etc...) have been discussed anecdotally by Oren Klaff in his book Pitch Anything, but could perhaps be studied more rigorously using the lens model framework.
Posted on April 19, 2016 @ 07:44:00 AM by Paul Meagher
In my last blog (The Lens of Common Sense) I discussed the Lens Model of judgment in more depth and the idea that one way to implement the lens model is by using multiple linear regression. In today's blog I want to follow up on that idea and show how useful a lens model can be for the purposes of real estate appraisal, and by implication, many other domains that involve judgment under uncertainty.
The data that I want to show you came from an old stats textbook (p. 727) and was provided by a real estate appraisal company who were asked to help an apartment building owner fight a property tax bill. The owner felt that the tax bill was too high and the appraisal company was brought in to formalize the owner's intuition and help argue the owner's case.
The appraisal company randomly selected 25 apartment buildings that were sold in 1990. The data was organized according to 5 indicators of worth along with the apartment building sales price:
The procedure the appraiser used to determine whether the owner was paying too much was to generate a linear model from this data using multiple linear regression. The linear model looked something like this:
Sales Price = X + (Weight1 * Num Apt. Units) + (Weight2 * Age of Structure) + (Weight3 * Lot Size) + (Weight4 * Num Parking Spaces) + (Weight5 * Gross Building Area)
The appraiser then applied the linear model to data from the owner's apartment building to arrive at an estimate of its probable sale price. Any significant discrepancy between the predicted sale price and the property tax valuation could be argued to be unfair.
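Here is a sketch of the appraiser's procedure using NumPy's least-squares solver. The apartment data below is invented (the sale prices are generated from a known linear rule, so the fit recovers it exactly); it stands in for the textbook's 25 buildings, which I don't reproduce here:

```python
import numpy as np

# Hypothetical sold buildings: [num_units, age, lot_size, parking, gross_area].
X = np.array([
    [10, 25,  5000, 12, 12000],
    [24, 10,  9000, 30, 26000],
    [16, 40,  6000, 14, 15000],
    [32,  5, 12000, 40, 34000],
    [12, 30,  5500, 10, 13000],
    [20, 15,  8000, 22, 21000],
    [ 8, 50,  4000,  8,  9000],
    [28, 20, 10000, 26, 30000],
], dtype=float)
# Invented sale prices, generated from:
#   price = 50000 + 15000*units - 2000*age + 20*lot + 3000*parking + 10*area
y = np.array([406000, 920000, 522000, 1220000, 440000, 756000, 264000, 1008000],
             dtype=float)

# Prepend an intercept column (the constant "X" in the formula above)
# and solve for the weights by least squares.
A = np.column_stack([np.ones(len(X)), X])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# Apply the fitted model to the owner's building to estimate its value.
owners_building = np.array([1, 18, 20, 7000, 18, 18000], dtype=float)
predicted_price = owners_building @ w
print(round(predicted_price))  # 654000
```

If the tax valuation were well above this predicted price, the owner would have a quantitative argument that the assessment is too high.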
So what we have here is a situation where the owner believed the value of the apartment building was assessed too highly. Why did the owner believe this? Were they able to verbalize all the cues they were using to arrive at that judgment? The real estate appraiser arguably used a formal statistical tool, namely, multiple linear regression, to make the apartment owner's common sense model explicit and probably also improved upon it.
The purpose of today's blog is to get a bit more down to earth with the lens model than my last blog, and perhaps to convince you that the lens model is a useful "mind tool" for understanding and improving judgment under conditions of uncertainty. One way to interpret and apply the lens model is by using the statistical technique of multiple linear regression, which allows you to estimate the weights that should be applied to each indicator in your lens model. There is a lot of evidence that if you do this for something you have to make frequent probabilistic judgments about, your lens model will outperform you! Humans lack consistency of judgment, but a formalized lens model always outputs the same numbers given the same inputs. Lack of consistency in judgment is one explanation for why a formalized lens model (for judgments under uncertainty) exhibits superior performance to a person's common sense lens model.
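The consistency point can be simulated directly. The sketch below assumes a hypothetical judge whose judgments equal a fixed linear rule plus trial-to-trial noise, and compares that judge's squared error against the noise-free linear model built from the same weights:

```python
import random

random.seed(7)

weights = [0.5, 0.3, 0.2]   # the judge's (implicit) cue weights, made up
n_cases = 1000
model_err = judge_err = 0.0

for _ in range(n_cases):
    cues = [random.gauss(0, 1) for _ in range(3)]
    # The environment itself is irreducibly uncertain.
    criterion = sum(w * c for w, c in zip(weights, cues)) + random.gauss(0, 0.3)
    model_judgment = sum(w * c for w, c in zip(weights, cues))
    # The human judge applies the same rule, but inconsistently.
    judge_judgment = model_judgment + random.gauss(0, 0.5)
    model_err += (model_judgment - criterion) ** 2
    judge_err += (judge_judgment - criterion) ** 2

print("model MSE:", round(model_err / n_cases, 3))
print("judge MSE:", round(judge_err / n_cases, 3))
# The consistent model's error comes out lower than the inconsistent judge's.
```

This is a toy version of the classic "bootstrapping" result: a model of the judge beats the judge, simply by removing the judge's inconsistency.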
Posted on April 18, 2016 @ 07:58:00 AM by Paul Meagher
In 3 recent blogs (1, 2, 3) I've been discussing the Lens Model, which was proposed by the psychologist Egon Brunswik (1903-1955) as a way to simultaneously understand how an organism relates to the world and how we might go about researching and designing experiments to understand that relationship.
The Lens Model has been used and applied in various domains of psychology (perception, decision making, social judgment, etc...) since Brunswik first proposed it. The person most responsible for promoting the lens model after Brunswik's death in 1955 was professor Kenneth R. Hammond (1917 - 2015). To explore the lens model in more detail, I tracked down Hammond's most highly cited book, Human Judgment and Social Policy: Irreducible Uncertainty, Inevitable Error, Unavoidable Injustice (1996), which includes a discussion of Brunswik's contributions, the lens model, and many other topics. In 1997 this book won the Outstanding Research Publication Award from the American Educational Research Association. It deserves the recognition and I highly recommend it to anyone with an interest in judgment and decision making. Hammond was almost 80 when he wrote the book, and he offers many insights into the philosophical and scientific basis of judgment and policy. He wrote 2 more books after this one.
In today's blog I want to focus more narrowly on Kenneth's discussion of the Lens Model and how information from cues is organized. I'll begin by displaying Ken's version of the Lens Model which appeared on page 168 of his book:
There are four things I want you to notice regarding Kenneth's version of the lens model:
Kenneth prefers to use the term "indicators" rather than "cues". In this version of the lens model the organism's judgment about some intangible aspect of world is mediated by Multiple Fallible Indicators.
The degree of validity between an indicator (e.g., obesity) and some intangible state of the world (e.g., diabetes) is depicted by the thickness of the line connecting them.
The degree to which an indicator (e.g. obesity) is utilized in making a judgment about the world (e.g., person has diabetes) can also be depicted by thickness of the line connecting them. The ecological validity of an indicator may not be matched by a corresponding degree of utilization of that indicator in making a judgment (i.e., line thickness may change as it passes through the lens).
There is an arc that runs from "Judgment" to the "Intangible State" that is being judged. This functional arc is a measure of the "Accuracy of Judgment". Brunswik labelled the functional arc with the word "Achievement" but Hammond had a particular theoretical axe to grind in this book (correspondence vs coherence theories of truth) and preferred the phrase "Accuracy of Judgment" to stress the importance of correspondence over rational coherence in accounting for "Achievement".
To more fully understand the lens model we need to understand how the information from multiple fallible indicators is combined to yield a judgment. Here is Ken explaining how this happens:
One feature of the lens model is its explicit representation of the cues used in the judgment process. Although such diagrams are useful, they do not show one of the most important aspects of the judgment process - the organizing principles, the cognitive mechanism by which the information from multiple fallible indicators is organized into a judgment. One such principle is simply "add the information". Thus, if the task is selecting a mate, and, on a scale from 1 to 10 cue No. 1 (wealth) is a 5, cue No. 2 (physique) is a 7, and cue No. 3 (chastity) is a 2, the organism simply adds these cue values and reaches a judgment of 14 (where the maximum score is 30). Another principle involves averaging the cue values; another principle requires weighting each cue according to its importance before averaging them. An interesting and highly important discovery, first introduced to judgment and decision making researchers by R.M. Dawes and B. Corrigan, is that organizing principles of this type will be extremely robust in irreducibly uncertain environments. That is, if (1) the environmental task or situation is not perfectly predictable (uncertain), (2) there are several fallible cues, and (3) the cues are redundant (even slightly), then (4) these organizing principles (it doesn't matter which one) will provide the subject with a close approximation to the correct inference about the intangible state of the environment, no matter which organizing principle may actually exist therein - that is, even if the organism organizes the information incorrectly relative to the task conditions!
I cannot overemphasize the importance of the robustness of what the professionals call the linear model. "Robustness" means that this method of organizing information into a judgment is powerful indeed. Any organism that possesses a robust cognitive organizing principle has a very valuable asset in the natural world, or in any information system involving multiple fallible indicators and irreducible uncertainty. Its value is twofold:
1. It allows one to be right for the wrong reason - that is, one can make correct inferences even if the principle used to organize the information is not the correct one (one may exhibit correspondence competence without correct knowledge of the environmental system and without coherence competence).
2. One does not have to learn what the correct principle is in order to make almost correct, useful inferences. This conclusion suggests that learning was not an important cognitive activity in the early days of Homo sapiens. Whether early Homo sapiens learned this robust organizing principle or were endowed with it - that is, their biological makeup included it from the very beginning - I cannot say, of course, nor can anyone else. I can say, however, that any organism possessing such a robust principle would have - and I will insist, did have - an evolutionary advantage over any organism that relied on a more analytical - and thus less robust and more fragile - organizing principle.
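Hammond's robustness point is easy to demonstrate numerically. The simulation below uses made-up parameters: it builds an uncertain environment with three slightly redundant cues and compares three organizing principles; all of them track the criterion about equally well:

```python
import random

random.seed(42)

def correlation(xs, ys):
    """Pearson correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Irreducibly uncertain environment: the criterion and all three cues
# are noisy reflections of a common latent state, so the cues are
# fallible and (slightly) redundant.
cases = []
for _ in range(500):
    latent = random.gauss(0, 1)
    cues = [latent + random.gauss(0, 1) for _ in range(3)]
    criterion = latent + random.gauss(0, 0.5)
    cases.append((cues, criterion))
criteria = [c for _, c in cases]

# Three different organizing principles applied to the same cues.
principles = {
    "sum": lambda cues: sum(cues),
    "average": lambda cues: sum(cues) / len(cues),
    "arbitrary weights": lambda cues: 0.6 * cues[0] + 0.3 * cues[1] + 0.1 * cues[2],
}

for name, rule in principles.items():
    judgments = [rule(cues) for cues, _ in cases]
    print(name, round(correlation(judgments, criteria), 3))
```

Summing and averaging are linear rescalings of each other, so they achieve identical correlations with the criterion; even the arbitrary weighting lands close, which is Dawes and Corrigan's point.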
Because we can arrive at accurate judgments about the world with a lens model that uses one of these simple organizing principles, Ken Hammond, and Egon Brunswik before him, argued that a lot of our thinking is "quasirational". It occupies a middle ground between pure intuition and a fully coherent rational explanation. Hammond argues that this quasirational thinking is what people are referring to when they use the term "common sense". Where many have argued that entrepreneurship and investing are matters of either intuition (system 1) or sophisticated rational models (system 2), Hammond and Brunswik are arguing that many entrepreneurial and investment judgments, because they operate in an environment of extreme uncertainty, occupy a quasirational middle ground which a lens model attempts to capture. Common sense is used to organize information from multiple fallible indicators into a judgment that is often accurate.
A final comment to make on the lens model is to point out that it was developed during a historical period when multiple correlation and multiple regression statistical techniques were being pioneered and introduced into academic research. Gerd Gigerenzer has argued (link to PDF article) that statistical tools often get turned into psychological theories (i.e., the tools-to-theories heuristic), so one might view the lens model as the type of psychological model you get when you generalize the importance of multiple correlation and multiple linear regression techniques and ideas. Multiple correlation and multiple linear regression are often used to create and evaluate lens models. While the lens model can sometimes be usefully equated with multiple linear regression, part of Brunswik's inspiration for the lens model was how our senses combine information from multiple fallible cues to arrive at accurate perceptual judgments. Further development of the lens model might take inspiration from nature (i.e., how vision, hearing, touch, smell, and taste combine multiple fallible cues to yield accurate judgments) to find additional organizing principles.
I just finished planting some grape cuttings in my pit greenhouse and these are the 1 yr old grape vine cuttings I have planted out so far:
As I was preparing the vine cuttings to be planted in my greenhouse, I began thinking about how to apply the lens model to the problem and came up with the following lens model:
I created the diagram using the free draw.io web application, a tool I highly recommend for creating diagrams. In my overall scheme for growing 1 yr old grape vines this year, creating the cuttings is one critical part where I have a choice to do it in several different ways so as to achieve a maximum number of viable 1 year old grape vines. The policy I have chosen may lead to the desired goal; however, it is possible that creating longer canes and leaving more buds on the vine would lead to a greater number of viable 1 year old grape vines. I have had some success in the past with my minimalist approach, so I went with it again but decided to formalize the rules a bit more this year. I do not use a ruler when measuring sizes, so when I say 8 inches I really mean my subjective perception of the size of the cutting is around 8 inches. This size limitation means I can comfortably fit my cuttings in a common cat litter container I have around. I like to soak the cuttings in water for a while before planting out, and this rectangular container keeps the cuttings oriented in the correct direction when I soak them.
This particular lens model only captures an aspect of what is required to maximize the production of 1 yr old grape vines. In the expanded diagram below I begin to hint at some of the other factors that are important, each one of which would have its own set of simple rules designed to yield a maximum number of 1 yr old grape vines.
It comes as no surprise to me that there might be multiple rules arranged hierarchically that are required to achieve some high-level goal. That is usually how higher-level goals are accomplished. Often what happens is that if you have been performing some goal-oriented activity for a while, a lot of these lower-level steps become routinized, and when we come up with our lens models they refer to higher-level requirements for achieving our goal. If you have to teach someone else to grow 1 yr old grape vines, however, you have to begin to break things down like this, and in the process you might question whether your approach is really the best one for achieving the goal you want to achieve.
I will ultimately find out if my approach maximizes the production of 1 yr old grape vines, provided I don't end up leaving the door closed on the greenhouse on a sunny day and baking some plants. A lot of things have to go right in order for me to determine if my approach to preparing cuttings is the best approach to achieving maximum production of 1 yr old grape vines.
It is useful to vary the conditions of your experiment to the extent that you are able in order to determine what conditions maximize productivity. I had some soil that had leaf mulch on top that I covered with potting soil. I ran out of potting soil and decided to grow some cuttings in the leaf mulch without any potting soil on top. The leaf mulch might maintain cooler soil conditions than having the soil exposed, so I'll be looking for differences in growth that might be attributable to the soil-exposed +/- factor. Unfortunately I only have one variety of grape vine cutting planted in the leaf mulch; it would be better to test growth of the same vine type with and without leaf mulch. The lens model can be paired with some theory about how to conduct representative and generalizable experiments to refine your lens model(s). That was not followed in this case :-)
It is not difficult to come up with an action-oriented lens model and diagram it out. You do, however, have to get clear about what your goal is and the means you will be selecting to try to achieve it. The act of representing goal + means relationships via lens diagrams might be useful. To establish the ecological validity of your preferred means you should consider varying your means to see if they yield an outcome as good or better than your preferred approach. Some of the cutting length/number of buds experimentation occurred in previous years so this year I wanted to push the envelope a bit more in terms of maximizing 1 yr old grape vine yield in my greenhouse.
Posted on April 5, 2016 @ 07:43:00 AM by Paul Meagher
In my last blog I introduced you to the lens model. In today's blog I want to expand upon the lens model by introducing you to another important diagram that the founder of the lens model, Egon Brunswik, also used to explain it.
The reason I find it necessary to expand the lens model is that I was trying to apply the lens model to the problem of preparing grape vine cuttings to achieve the maximum number of propagated vines. This happens to be a task I'm occupied with at the moment.
The problem I ran into was that the pruning policies I followed in preparing the cuttings are not best described as "cues" emitted from the environment, they are rather the "means" I have chosen to achieve a goal. The lens model seems to be more focused on accounting for "perception" than "action".
Further examination of the lens model, however, reveals that the lens model as I presented it yesterday is only the left hand side of a larger model that Egon Brunswik offered to explain the relationship between the organism and the environment. Here is the expanded lens model:
Note the perfect bilateral symmetry of the model. Note also that "cues" stand between the organism and the environment on the perception end (input side) and that "means" stand between the organism and the environment on the "action" end (output side). This would seem to imply that everything I said yesterday about "cues" mediating between the world (or distal object) and the observer also applies to the "means" that mediate between some goal object and the observer.
In other words, to achieve some goal object we must select the means to get there. The lens model applies to situations where the means to achieving some goal in the future is not certain so we choose "means" that we think will get us there but which, in reality, might not have a strong relationship to achieving the goal. We have an internal model that we use to explain the relationship between the means selected and the goal object that may not in fact be the best means we could have chosen to achieve that goal object. It is a happy day when the means we have chosen have high ecological validities in achieving the goal state.
So I stand by my assertion that the lens model is a useful framework for understanding where simple rules might fit in the overall scheme of things. There can be simple rules for handling decision making on the input side of things, and there can be simple rules for selecting the means to achieve some goal object. We may appear to be in Plato's cave, observing shadows on the wall and trying to figure out the objects they represent; Brunswik's model, however, offers the hope that we can get closer to the opening of the cave and behold the object in an ever clearer light. The distal or goal object does not live in a world of ideal forms, but rather is an object that we can focus on more or less clearly depending upon the cues and means we choose and how strongly correlated they are with the distal or goal object.
The lens model has something useful to offer investors who must make investment decisions based upon multiple unreliable cues, and entrepreneurs who are seeking a goal state and need to select from multiple means that are more or less correlated with that goal state. We are searching for the best cues and the best means, and these can often be formulated as simple rules.
Posted on April 4, 2016 @ 08:00:00 AM by Paul Meagher
What is the lens model?
The lens model was developed by the psychologist Egon Brunswik between 1930 and 1950. He conducted research in perceptual psychology and, in particular, on depth perception. A big problem in depth perception is that you have a 3 dimensional world and a 2 dimensional retina that the light from the world impinges upon. How are you able to reconstruct a three dimensional world from this limited two dimensional information?
It turns out that there are a large number of cues for determining depth that we can glean from 2 dimensional imagery. There are cues such as parallax, stereopsis, occlusion, linear perspective, texture gradients and so on. There are even more cues if we incorporate observer and world motion into the mix.
Egon observed that each cue, under certain circumstances, can provide misleading information about depth (see the Ames Room). He also suggested that the importance we assign to a cue should depend on how reliably the cue signals information about depth. The mental leap that Egon took was to say that what is true of perception is true more generally in psychology: there are often multiple cues that might indicate, for example, what a person's psychiatric diagnosis should be, and we should only put our faith in those cues that have high ecological validities (i.e., are reliably correlated with the criterion we are trying to determine).
Egon proposed the lens model as a foundational model that psychology could use for research design and model building. The basic idea is that the real state of the world (the distal stimulus, or the criterion to be judged) on the left hand side emits multiple and sometimes redundant cues about the state of the world (think depth perception cues). On the right hand side we have the observer, who assimilates this cue information to arrive at a decision about the state of the world. The observer never sees the world directly; instead they view the world through a lens. That lens consists of multiple cues that we take to be a proxy for some state of the world (e.g., depth relations among objects).
When we rely upon a cue (e.g., arrival of geese) to inform us about some state of the world (e.g., whether spring has arrived) we can assign that cue a weight. There are often multiple cues providing us with more or less reliable information about some state of the world. Egon believed that we intuitively assign weights to these various cues, sum the weighted cues, and then infer whether some state of the world is true or not depending on whether some decision threshold is met. Our depth perception system would appear to perform such calculations automatically, but we can also perform such calculations in other areas in a more controlled way using the lens model.
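This weighted-cue calculation is easy to sketch in code. In the example below, the cue names, cue values (1.0 = present, 0.0 = absent), weights, and the decision threshold are all invented for illustration:

```python
# Hypothetical lens-model judgment about whether spring has arrived.
# Cue values, weights, and the threshold are invented for illustration.
cues    = {"geese_arrived": 1.0, "snow_melted": 1.0, "buds_opening": 0.0}
weights = {"geese_arrived": 0.5, "snow_melted": 0.3, "buds_opening": 0.2}

def judge(cues, weights, threshold=0.5):
    """Sum the weighted cues and compare the total to a decision threshold."""
    score = sum(weights[name] * value for name, value in cues.items())
    return score, score >= threshold

# The weighted sum (0.8) exceeds the threshold (0.5), so the judge
# concludes that spring has arrived.
score, spring_has_arrived = judge(cues, weights)
```

The same three-line pattern (weight, sum, threshold) covers any of the judgment tasks discussed in this blog; only the cue names and weights change.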
The final aspect of this model that is worth noting is that on the left hand side we can do research to establish how reliably correlated a given cue (e.g., sleep length) is with some state of the world (e.g., patient has depression) in order to determine the ecological validity of the cue. On the right hand side, we might also use the cue (e.g., sleep length) to arrive at a diagnosis of depression, but we might assign it an incorrect weight. We might also be using a cue that is not reliably associated with the criterion (low ecological validity) and arriving at incorrect assessments as a result. So we need to distinguish between the ecological validities of cues on the left hand side and cue utilization validities on the right hand side (i.e., whether our psychological model is capturing the right cues and assigning them the right weights).
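The distinction between the two validities can be made concrete with a small calculation. In the sketch below, all of the criterion, cue, and judgment values are invented for illustration; the ecological validity is the correlation between the cue and the real criterion, while the cue utilization validity is the correlation between the cue and the judge's ratings:

```python
# Hypothetical data illustrating the two validities. The criterion is the
# real state of the world (e.g., a depression score), the cue is sleep
# length, and the judgment is a clinician's rating. All numbers invented.
criterion = [2, 5, 3, 8, 6, 9, 4, 7]   # actual depression scores
cue       = [7, 6, 7, 4, 5, 3, 6, 4]   # hours of sleep (shorter = more depressed)
judgment  = [3, 4, 3, 7, 6, 8, 4, 7]   # clinician's depression ratings

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

ecological_validity  = pearson(cue, criterion)  # cue vs. the real world
utilization_validity = pearson(cue, judgment)   # cue vs. the judge's ratings
```

A well-calibrated judge is one whose utilization validities line up with the ecological validities, which is exactly the symmetry shown in the lens diagram.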
The reason I decided to discuss the lens model is that the Simple Rules book I have been blogging about recently didn't offer an overall framework for thinking about how simple rules relate to the world. To use simple rules more effectively, I would argue that you benefit from a correspondingly simple model of how they relate to the world, why they work, why they don't work, and how they can be improved. I believe the lens model provides one such framework. Simple rules can be understood as weighted cues we use to arrive at particular decisions or actions.
Some interesting research has been done on simple linear models of decision making (which the lens model would be an example of) where you assign a weight to each cue, multiply the measured value of the cue by the weight, sum the terms, and compare the total to some threshold in order to make your decision. For example, the graduate school admission ratings that psychologist Reid Hastie used could be modelled with this equation (from Rational Choice in an Uncertain World, 2nd Ed. 2010, by Reid Hastie & Robin Dawes, p. 49):
Admissibility Rating = 0.012 (Verbal GRE Test Score) + 0.015 (Quantitative GRE Test Score) + 0.25 (Warmth of Recommendations) + 0.410 (College Quality) + Other Factors - 13.280.
Notice that some variables don't have much weight and don't affect the rating much, so they could be removed. The most heavily weighted cues are the "warmth of recommendations" and the "college quality" of the applicants plus, possibly, "other factors" that have non-trivial weights. Using this simple model of graduate admission ratings, Reid Hastie could replace a more complicated screening process with a much simpler one and arrive at roughly the same rating. This is one way to arrive at simple rules.
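The admissibility equation above translates directly into code. The weights and intercept below come from the equation as quoted; the applicant's scores plugged in are hypothetical:

```python
# The admissibility equation from Hastie & Dawes, with the weights quoted
# in the text. The applicant scores used below are hypothetical.
def admissibility(verbal_gre, quant_gre, warmth, college_quality, other=0.0):
    """Weighted linear combination of admission cues minus the intercept."""
    return (0.012 * verbal_gre + 0.015 * quant_gre + 0.25 * warmth
            + 0.410 * college_quality + other - 13.280)

# Hypothetical applicant: GRE scores of 600 (verbal) and 650 (quantitative),
# warmth of recommendations 4, college quality 10.
rating = admissibility(verbal_gre=600, quant_gre=650, warmth=4, college_quality=10)
```

Note that the units and scales of the "warmth" and "college quality" variables are not specified in the quoted equation, so the example values here are only placeholders to show the arithmetic.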
The purpose of today's blog was to talk about the lens model so that I can refer to it in any future blogs I want to. The second reason why I discussed the lens model is because a deficiency of the simple rules book from my perspective is that it didn't offer a simple graphical framework we might use to formally or graphically understand why simple rules work, how the rules can be combined, how they can go wrong, and what can be done to improve them. I believe the lens model provides some general guidance on how to properly think about and use simple rules.
If simple rules work, then simple linear models can also be argued to work. Simple linear models have an advantage over more complicated structural models: we can do mental arithmetic with them because they only involve simple additions and multiplications. We can also simplify the weighting scheme so we only use weights that are easy to work with mentally (e.g., -1, 0, 1/4, 1/3, 1/2, 2/3, 3/4, 1). If we are bounded in our computational abilities, working memory, and so on, then we must find techniques that are sufficiently simple that we stand a chance of using them in the real world. Perhaps the lens model, in its simplest interpretation, is a good starting point.
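The weight-simplification idea can be sketched as follows. The precise weights below are invented for illustration; each one is snapped to the nearest "mentally easy" fraction from the list given above:

```python
from fractions import Fraction

# Snap precise regression-style weights to the nearest mentally easy
# fraction. The precise weights below are invented for illustration.
EASY = [Fraction(-1), Fraction(0), Fraction(1, 4), Fraction(1, 3),
        Fraction(1, 2), Fraction(2, 3), Fraction(3, 4), Fraction(1)]

def simplify(weight):
    """Return the easy fraction closest to the given precise weight."""
    return min(EASY, key=lambda f: abs(float(f) - weight))

precise = [0.27, 0.52, 0.70]
simple = [simplify(w) for w in precise]  # 1/4, 1/2, 2/3
```

A natural follow-up check, which research on simple linear models suggests will often succeed, is to verify that the simplified weights rank your options in the same order as the precise ones.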
Here is a lens model handout that you might find useful for exploring the lens model and simple linear modelling further.