Ben Gill's article on the Technology Foresight Programme (THES March 24) gives an inaccurate and misleading impression of the process and of the tools which were used to support the panels.
Take first his comments on the co-nomination procedure, which was used to help identify panellists and those subsequently to be consulted. This was introduced because early consultation indicated a widespread feeling that Foresight should cast its net well beyond those normally serving on Government advisory committees. Furthermore, in other countries the identification of relevant experts to consult had proved difficult and time-consuming. Hence, during the preparatory "Pre-Foresight" stage, co-nomination was used to allow experts to identify further experts and to describe their own expertise. This was not a "write-in" process but a carefully controlled survey.
Almost 1,400 people returned forms, yielding a total of just under 6,000 names. It was never intended that this would be the sole means of identifying participants, though more than half came from this route. Members of the 15 panels were appointed by the steering group, which tried to ensure that the main interests were covered.
Remarks about the Delphi survey are misleading. Panels were heavily engaged in consideration of scenarios and major issues such as sustainability and sources of competitiveness; for most this formed the core of their activity and they had a free hand to pursue their agendas. However, they were also obliged to consult widely within the relevant communities. This was done in four ways: an initial survey in which 50 or so selected experts were asked to identify key trends, markets and technologies; a large-scale Delphi survey; regional workshops in which the Delphi issues were discussed; and direct contact with relevant organisations and individuals. The survey was not intended to "determine the nation's research priorities for the next two decades" but rather to provide a tool which allowed panels to consult large numbers of experts on topics identified as important.
Delphi is a well-established approach in these circumstances, with both its strengths and limitations well understood. In the realm of Technology Foresight it has been applied most extensively by the Japanese, whose Science and Technology Agency has carried out five such exercises since 1971 and continues to do so. A poll of participating firms found that the majority considered the exercise useful as an input to longer-term planning for research and development and for business projects.
As far as the process itself goes, Delphi topics have nothing to do with "extrapolation of how events have unfolded over the past few decades" as Mr Gill describes them. They are statements of possible levels of achievement or new developments in technological, market and social terms upon which experts are invited to give their views. The survey was not sent to "a broad cross-section of society" but to experts selected by the panels.
Overall, 2,585 forms were returned by the deadline in the postal Delphi, including 224 for Mr Gill's agriculture and natural resources (ANRE) panel. The mean number of responses per topic is not 30-40 as he states but 57 in the higher categories of expertise overall. For the ANRE panel the mean response per topic was 133, of whom 42 were in the higher categories of expertise. One reason this figure was below the overall average is that the panel covered an extremely diverse range of subjects. In all cases more experts responded than there were panel members with relevant knowledge (the ANRE panel had 21 members).
It is wrong to say that the views of those rating their expertise as "unfamiliar with the topic" or "casually acquainted" were disregarded. The presentation of the results includes all respondents' views on one line, followed in each case on the next line by the views of those rating their expertise on the topic in question as "familiar", "knowledgeable" or "expert".
It is not unreasonable to attach more credence to experts' views. While it is never easy to rate one's own expertise, respondents were aided by detailed definitions of these terms. Mr Gill's other remarks on this topic suggest that he believes the ratings for each topic were somehow consolidated. They were not. On two points I agree with Mr Gill. The time schedule was demanding. However, any loss of thinking time should be offset against maintaining the momentum necessary for the follow-up activities he rightly supports.
To conclude, it is worth considering the alternatives to systematic Foresight approaches. The established "method" has been succinctly described by the American acronym BOGSAT (Bunch Of Guys Sat Around A Table). Numerous exercises of this type have been carried out, and few, if any, have had any impact. While this approach is satisfying if you have a seat at the table, it fails to build the consensus and commitment necessary for the results to be implemented. Mr Gill should not be so hasty in dismissing the views of the large number of people who put in a substantial and worthwhile effort to assist the Foresight Programme.
Luke Georghiou is director of the Programme of Policy Research in Engineering, Science and Technology and led the team which carried out the support exercises for the Technology Foresight Programme. He would like to make it clear that the opinions expressed are his personal ones.