Most of us are confronted with experts in our lives and work, especially those who toil in education. Further, especially if we work in teams, we have to depend on given experts to ensure that the project we’re involved with succeeds. If they don’t have the expertise they claim, our project can flounder, at a cost to us all. So, how do we know whether or not they are as good as they seem?
Some years ago I wrote a brief (as demanded by the organisers) conference paper* that addressed this problem, and since it was largely unread, I’ve decided to resurrect it as a blog post. It’s addressed to instructional designers, but applies to any situation in which we are faced with an expert. So, without further ado, and with added pics (yes, James G. March is still going strong at 84), here we go with ‘The Instructional Designer Meets the Expert’:
As we all know, instructional designers are experts. The nature of their craft demands that they frequently work with other experts from a wide variety of backgrounds. In distance education, the Instructional Designer (ID) is often required to collaborate with subject matter experts in developing the content of learning materials. The problem that confronts the ID in such a situation is the extent to which the word of the expert can be accepted (and vice versa?). In other words, how expert is the expert?
The analysis of expertise
The nature of expertise has been a subject of investigation in recent years with experiments usually comparing the performance of experts and novices. Even the hallowed ground of pedagogy has come under scrutiny, with recent studies on expertise in teaching (Carter et al., 1987). The issue facing the ID is a little different. Rather than having to locate an expert, the expert is usually “provided”, having been identified by someone else as the person for the job, or having volunteered. So, faced with experts, the ID has to assess just how capable they are, and how much their expertise can be relied upon.
In some cases this is easy. The ID working with Jack Nicklaus on his video of golfing tips had, I’m sure, no doubt about the abilities of that expert. So, in a training situation, one usually seeks an expert in the form of a “best performer”. Good advice has recently been given on how to identify such people, and how to make the best use of them (Spitzer, 1986). But what if, as is more usual in distance education, you are an ID working with experts in a cognitive area, where the degree of expertise can be rather more difficult to assess? You may be presented with subject matter experts who are expected to assist with the content of the learning materials that you are developing, and yet be unaware of how good they are. How much can you rely on their ideas and advice? Further, should two experts disagree over a matter of some complexity, how do you resolve it?
Often the answer becomes a matter of the ID’s own personal opinion and judgement. In fact, it could reasonably be assumed that an experienced ID should be quite good at judging expertise. However, it would be better to have some guidelines on how best to make such a judgement, rather than rely purely on personal prejudices or “gut feelings”. The guidelines that we shall consider take the form of assumptions that can be made about the nature of expert knowledge, as well as possible applications of these assumptions.
A few years ago, James March gave educational administrators some sound advice on the analysis of expertise (March, 1974). In essence, he suggested starting with the areas of evidence and testimony in the law, using the techniques of interrogation and confrontation from those areas to assist in the process of analysis. Despite the rather negative connotations of the two words, an examination of them reveals how helpful they can be, if used in the right way. Interrogation is the skill of asking questions that elicit information on two critical issues:
What can the expert say that is relevant to the problem?
What degree of confidence can be placed in what is said?
Note that these are joint problems shared by the ID and the expert. Both are uncertain about what information might be relevant and what degree of confidence to place in the statements that are made. Interrogation is a skill for which training is available, and while the rules of interrogation that are taught in legal training are obviously not immediately transferable to the work of IDs, they may nevertheless provide a working base.
Confrontation involves observation. For example, we can confront one expert with another and observe the interaction, pinpointing areas of agreement and disagreement. Principally, though, confrontation proceeds from an assumption that the quality of answers can be assessed if the right questions are asked. Thus a part of our task is to improve the quality of our questioning.
Assumptions about expert knowledge
We now come to the crux of the matter. Both interrogation and confrontation require inferences to be made, based on certain analytical assumptions about expert knowledge. It is these assumptions, developed by March, in which we are interested.
They are listed below, together with suggestions on how to apply them to the work of an ID.
| Assumption | Application |
|---|---|
| (i) The Homogeneity Assumption. We can attempt to assess the competence of experts by sampling from their knowledge. We can check knowledge in areas shared in common. If experts know what they are talking about in these areas, we assume that they know what they are talking about elsewhere. | If you are working with an expert in business statistics and you have a background in mathematics, check the expert’s knowledge of a mathematical area in common. |
| (ii) The Density-precision Assumption. We may not be able to tell whether an expert’s knowledge is correct, but we can assess the density of that knowledge. If our expert can recite facts, studies, theories, documents, reports and observations, the precision of that knowledge seems greater than if the expert reports only on a sparsely occupied memory. If conclusions are reported without a richness of detail in support, the expertise can be doubted. | Question experts on their area of expertise. Focus on one area, and persist with that topic. Questions should be increasingly specific, until they narrow down to single concepts, and how these might best be presented and learned. |
| (iii) The Independence of Errors Assumption. Expertise can be assessed by comparing the expert’s observations with those of other experts. It is assumed that if several experts say the same thing, it is probably true. An expert who usually agrees with other experts is likely to be more reliable than one who disagrees with others. | Check some of the expert’s observations against a suitable up-to-date text. Further, a discussion with another expert, during which you can recite the observations of the first expert (without revealing their origin), may expose any disagreements. |
| (iv) The Distribution of Confidence Assumption. In any body of knowledge, there is variation in the degree of confidence with which different beliefs are held. If an expert does not exhibit such variation, it is assumed that he is not as well informed as those who do. | Such questions as “How wide an application does this concept have?”, or “How well developed is this theory?” should reveal any variation. |
| (v) The Interconnectedness Assumption. Knowledge tends to exist in interconnected networks, rather than as lists or discrete “chunks”. The assumption is that experts who describe knowledge in terms of a series of independent bits are either experts in a domain in which the knowledge is less reliable than others, or are themselves less reliable experts. | Invite the experts to explain the interconnections and interdependence of areas within the body of knowledge. Suitable frameworks for such explanation can be offered to the expert, in the form of devices such as concept maps and pattern notes. |
Naturally, the assumptions are general tendencies, rather than universal rules. You have probably already thought of exceptions to each of them. As March admits, “Knowledge is not necessarily homogeneous; density of knowledge is not necessarily correlated with precision; the errors of experts may not be independent at all; variations in confidence across experts may be quite meaningful; knowledge may be quite unconnected.” (March, 1974, p. 31)
So, a contingency perspective is required – each situation needs to be assessed carefully to see how the assumptions might be applied.
The assumptions have been presented to give IDs a possible basis for their assessment of the capabilities of experts. Each is a way of looking at expert knowledge that can stimulate the formulation of relevant questions in such an assessment. In other words, their value is to provide ideas for the formation of questions: IDs should always bear in mind that the quality of an expert’s answers can only be assessed if the right questions are posed.
Carter, K., Sabers, D., Cushing, K., Pinnegar, S. and Berliner, D.C. (1987). Processing and using information about students: a study of expert, novice and postulant teachers. Teaching and Teacher Education, Vol. 3, No. 2, pp. 147-157.
March, James G. (1974). Analytical skills and the university training of educational administrators. The Journal of Educational Administration Vol. 12, No.1, pp. 17-44.
Spitzer, Dean R. (1986). The best performer: sharing the secrets of success. Performance and Instruction, Vol. 25, No. 10, pp. 30-32.
* Murphy, D. (1988) The instructional designer meets the expert. Developing Distance Education. Sewart, D. and Daniel, J. (Eds.). International Council for Distance Education, Oslo, 322-324.