In a previous post, I highlighted the power of focusing on the how, of detailing what is happening in the now. For me, this was the most exciting thing when I first discovered NLP Modeling. By asking questions and by closely observing people, we can identify how any given person is currently, at this moment, creating his or her sense of reality. And if we can do that, then we can figure out how that reality came into existence, operates, and can be altered. Incredible!
Now in NLP Modeling, Wyatt Woodsmall (1990) was the first to differentiate two dimensions or levels of modeling. He labeled them Modeling I and Modeling II. I think this distinction provides a valuable way to think about the range of modeling that we can do.
Modeling I refers to pattern detection and transference. This kind of modeling detects a pattern of behavior that shows up in certain skills, abilities, and expertise. By explicating the patterns of behavior in the skill or skills—the what that an expert actually does to achieve a result—this modeling focuses on reproducing the products of the expert. It focuses on learning the sets of distinctions, procedures, and processes which enable a person to reach a desired outcome.
Modeling II refers to modeling the first modeling (Modeling I). As such, it focuses on the how of an expert—how does the expert actually create and perform the expertise? It does not focus on what is produced (that’s the first modeling); it focuses on the background competencies. Now we focus on the processes which are necessary to generate the patterns that form the content of Modeling I. In this modeling, we especially pay attention to the beliefs and values that outframe the expert. Here we attend to the meta-programs, the contexts and frames, the meta-states, etc.—all of the higher frames.
I like this distinction because, as Woodsmall points out, the field of NLP itself resulted from Modeling I, but not Modeling II. Let me explain. NLP emerged from the joint venture of John Grinder and Richard Bandler as they studied the language patterns of Fritz Perls and Virginia Satir. First Richard used his gift of mimicking Perls’ and Satir’s speech, tonal, and language patterns. Though untrained in psychology and psychotherapy, by simply reproducing the “magical” effects of these communication experts, he found that he could get many of the same results as the experts. Incredible! How was this possible?
In searching for that answer, John used Transformational Grammar and his unique skills in that field to pull apart the “surface” structures for the purpose of identifying the “deep” structures. Both of them wanted to discover how this worked. Frank Pucelik also was a part of all of that, and he created the context and the original group in which all of the discoveries took place.
From the theory of Transformational Grammar, the assumptions of Cognitive Psychology (Noam Chomsky, George Miller, George Kelly, Alfred Korzybski, Gregory Bateson), and the copying of Perls and Satir, they specified what “the therapeutic wizards” actually did that had the transformative effect upon clients. That was the original NLP modeling.
This adventure in modeling then gave birth to “The Structure of Magic” (1975/1976), which gave us the first NLP model. This was originally called The Meta-Model of Language in Therapy. Today we just call it The Meta-Model. It is a model of the language behavior of Perls and Satir, that is, how they used words in doing change work with clients. And that then became the central technology of NLP for modeling.
The amazing thing is that with that first model, they were able to model a great deal of the governing structure of a person’s experience. That enabled them to peek into a person’s model of the world just by listening to the features that linguistically mark out how the person has created his or her map. While this is not all that’s needed for modeling, it certainly gives us a set of linguistic tools for figuring out how a piece of subjective experience works. It answers the how questions:
- How does a person depress himself?
- How does a person take “criticism” effectively and use it for learning?
- How does another person look out at an audience and freak out?
The Meta-Model gave the original co-developers of NLP numerous tools for both understanding and replicating a person’s original modeling. Soon thereafter, as they modeled Erickson, they began adding all kinds of non-verbal and non-linguistic distinctions to their model, enriching the modeling process even further. Because NLP started with Modeling I and not Modeling II, the early NLP thinkers and trainers did not have access to the higher level of modeling, nor did they even seem aware of it, until some time later. Eventually this realization arose as people began asking some basic modeling questions:
- What strategy did Perls use in working with clients?
- What strategy enabled Satir to do her “magic” with families?
- What strategy describes Erickson’s calibration skills and use of hypnotic language patterns?
- How did any one of those wizards make decisions about what to use when?
Even to this day, we do not know. We know what they produced, but not how they produced it. We have the results of their magic, but not the formula that identifies the states and meta-states, the beliefs and higher frames of mind that enabled them to operate as “wizards” in the first place. Woodsmall (1990) writes:
“In short, if NLP is the by-product of modeling Erickson, Perls, and Satir, then why are we never taught how they did anything? All we are taught is what they did. This means that we can imitate the powerful patterns that they used, but we don’t know how they generated and performed them to start with. From this it is evident that the part of NLP that is the by-product of modeling is a by-product of Modeling I, but not of Modeling II.” (p. 3)
As the product of Modeling I, all that we originally received in NLP was the result of modeling. We received the patterns and procedures which the modelers found in Perls, Satir, and Erickson—reframing, swishing, anchoring, collapsing anchors, etc. We received the NLP patterns. Bandler and Grinder gave us a legacy of dramatic processes that enable people to change.
Only later, as Bandler, Grinder, DeLozier, Cameron-Bandler, Dilts, and Gordon began to wonder about the modeling itself, did they start to explore the processes, assumptions, patterns, etc. of modeling. From that came the commission from Richard and John for Robert Dilts to write the second modeling book, “NLP: Volume I”. That volume made Modeling II available.
They also left us their theory about change, mind, neurology, language, etc. Of course, they did not call it “a theory.” In fact, they pulled off a big “Sleight of Mouth” pattern as they told us that they had no theory, just a description of what worked. “It’s a model, not a theory.” With that mind-line, they distracted our attention and offered “the NLP Presuppositions,” telling us that they were not true, could not be proven, but seemed like really nice “lies” that would take us to more resourceful places. So we just memorized them, only half aware (if that) that within the NLP Presuppositions they had hidden away the theory of neuro-linguistic programming.