Recent critiques of scientific evaluation: new wine into old wineskins?
Keywords: SCIENTIFIC EVALUATION, PEER REVIEW, BIBLIOMETRICS, ACADEMIC RESEARCH

Abstract
This article examines recent critiques of scientific evaluation presented in seven documents (manifestos) published between 2010 and 2016. The goals are twofold: a) to examine and assess the critiques of (academic) science evaluation contained in these manifestos, and b) to compare them with critiques made in previous decades, exploring similarities, divergences, and old and emergent issues. The texts were selected for their wide dissemination and the discussion they prompted. The study was carried out through content analysis of the texts and their contextualization through a literature review. We conclude that most of the critiques of science evaluation depicted in the manifestos are not new; however, there is a clear displacement of the center of the discussion from peer review towards quantitative bibliometric assessment. It is also evident that some critiques, particularly those calling attention to the degradation of scientific quality and relevance, have become stronger over the past decade, as the growing scale and efficiency of quantitative evaluation methods made their effects on knowledge production and academic activity more evident.
References
Alvesson, M. and A. Spicer (2016), “(Un)Conditional surrender? Why do professionals willingly comply with managerialism?”, Journal of Organizational Change Management, vol. 29, N° 1, pp. 29-45.
American Society for Cell Biology et al. (2012), “San Francisco Declaration on Research Assessment (DORA)”. Available at: <https://sfdora.org/read/> [In Spanish: “Declaración de San Francisco sobre la Evaluación de la Investigación”. Available at: <https://sfdora.org/read/es/>, reproduced in this dossier].
Arvanitis, R. and Y. Chatelin (1988), “National strategies in tropical soil sciences”, Social Studies of Science, vol. 18, N° 1, pp. 113-146.
Avalos, I. (1997), “El CONICIT: casa de pares e impares (o cómo no hay ideas equivocadas, sino extemporáneas)”, in Sutz, J. (ed.), Innovación y desarrollo en América Latina, Caracas, CLACSO/AECI/Nueva Sociedad, pp. 151-162.
Butler, L. (2007), “Assessing university research: a plea for a balanced approach”, Science and Public Policy, vol. 34, N° 8, pp. 565–574.
Chubin, D. and E. Hackett (1990), Peerless science: peer review and US science policy, Albany, State University of New York Press.
Chubin, D. and T. Connolly (1982), “Research trails and science policies: local and extra-local negotiation of scientific work”, in Elias, N., H. Martins and R. Whitley (eds.), Scientific establishments and hierarchies: sociology of the sciences. Vol. VI, Dordrecht, Reidel Publishing Company.
Cozzens, S. et al. (eds.) (1990), The research system in transition, Dordrecht, Kluwer Academic Publishers.
Davyt, A. and L. Velho (2000), “A avaliação da ciência e a revisão por pares: passado e presente. Como será o futuro?”, História, Ciências, Saúde-Manguinhos, vol. 7, N° 1, pp. 93-116.
De Bellis, N. (2014), “History and evolution of (biblio)metrics”, in Cronin, B. and C. Sugimoto (eds.), Beyond bibliometrics: harnessing multidimensional indicators of scholarly impact, Cambridge, The MIT Press, pp. 23-44.
Dickson, D. (1988), The new politics of science, Chicago, University of Chicago Press.
Donovan, C. (2007), “Introduction: Future pathways for science policy and research assessment: metrics vs peer review, quality vs impact”, Science and Public Policy, vol. 34, N° 8, pp. 538-542.
Etzkowitz, H. (1990), “The second academic revolution: the role of the research university in economic development”, in Cozzens, S. et al. (eds.), The research system in transition, Dordrecht, Kluwer Academic Publishers, pp. 109-124.
Fabbri, P. and B. Latour (1995), “La retórica de la ciencia: poder y deber en un artículo de ciencia exacta”, in Fabbri, P., Tácticas de los signos, Barcelona, Gedisa, pp. 265-290.
Frame, J. D. (1980), “Measuring scientific activity in lesser developed countries”, Scientometrics, vol. 2, N° 2, pp. 133-145.
Frame, J. D. (1985), “Problems in the use of literature-based S&T indicators in developing countries”, in Morita-Lou, H. (ed.), Science and technology indicators for development, Boulder, Westview Press Inc, pp. 117-121.
Gibbons, M. et al. (1994), The new production of knowledge: the dynamics of science and research in contemporary societies, London, Sage. [In Spanish: Gibbons, M. et al. (1997), La nueva producción del conocimiento: la dinámica de la ciencia y la investigación en las sociedades contemporáneas, Barcelona, Pomares-Corredor].
Gilbert, N. (1978), “Measuring the growth of science: a review of indicators of scientific growth”, Scientometrics, vol. 1, N° 1, pp. 9-34.
Gläser, J. and G. Laudel (2007), “The social construction of bibliometric evaluation”, in Whitley, R. and J. Gläser (eds.), The changing governance of the sciences: the advent of research evaluation systems, Dordrecht, Springer, pp. 101-123.
Goldreich, O. (2015), “Content-Oblivious Quality Measures and the Control of Academia”, Department of Computer Science, Weizmann Institute of Science. Available at: <http://www.wisdom.weizmann.ac.il/~oded/F/measures-en.pdf>
Gosselain, O. (2012), “Slow Science et Désexcellence: Quelques Poches de Résistance en Belgique”, Politique des Sciences seminar, Paris, EHESS. Available at: <https://pds.hypotheses.org/1968>
Halffman, W. and H. Radder (2015), “The Academic Manifesto: From an Occupied to a Public University”, Minerva, vol. 53, N° 2, pp. 165-187. [In Spanish: Halffman, W. and H. Radder (2017), “El manifiesto académico. De la universidad ocupada a la universidad libre”, CIC. Cuadernos de Información y Comunicación, vol. 22, pp. 259-281, reproduced in this dossier].
Harnad, S. (1998), “Web matters: the invisible hand of peer review”, Nature – Web matters, 5 November. Available at: <https://www.nature.com/articles/nature28029>
Hicks, D. et al. (2015), “The Leiden Manifesto for research metrics”, Nature, vol. 520, N° 7548, pp. 429-431. [In Spanish: “El Manifiesto de Leiden sobre indicadores de investigación”. Available at: <http://www.leidenmanifesto.org/uploads/4/1/6/0/41603901/manifiesto_cast.pdf>, reproduced in this dossier].
Hills, P. and A. Dale (1995), “Research and technology evaluation in the United Kingdom”, Research Evaluation, vol. 5, N° 1, pp. 35-44.
Hirsch, J. (2005), “An index to quantify an individual's scientific research output”, Proceedings of the National Academy of Sciences, vol. 102, N° 46, pp. 16569-16572. Available at: <https://www.pnas.org/content/pnas/102/46/16569.full.pdf>
Holbrook, J. (1992), “Why measure science?”, Science and Public Policy, vol. 19, N° 5, pp. 262-266.
Jagodzinski-Sigogneau, M., J-P. Courtial and B. Latour (1982), “How to measure the degree of independence of a research system?”, Scientometrics, vol. 4, N° 2, pp. 119-133.
L’Atelier des Chercheurs – Université Libre de Bruxelles (2010), “Charte de la désexcellence”, Brussels, Université Libre de Bruxelles. [In Spanish: “Estatuto de la desexcelencia (Versión 1.1)”, published in this dossier].
Lindsey, D. (1978), The scientific publication system in social science, San Francisco, Jossey-Bass Publishers.
Link, A. (1998), “US and non-US submissions”, Journal of the American Medical Association, vol. 280, N° 3, pp. 246-247.
Manten, A. (1980), “Publication of scientific information is not identical with communication”, Scientometrics, vol. 2, N° 4, pp. 303-308.
Martin, B. R. (2016), “Editors’ JIF-boosting stratagems – Which are appropriate and which not?”, Research Policy, vol. 45, N° 1, pp. 1-7.
Martin, B. R. and J. Irvine (1983), “Assessing basic research: some partial indicators of scientific progress in radio astronomy”, Research Policy, vol. 12, N° 2, pp. 61-90.
Merton, R. K. (1968), “The Matthew Effect in Science”, Science, vol. 159, N° 3810, pp. 56-63. [In Spanish: “El efecto Mateo en la ciencia”, in Merton, R. K. (1977), La sociología de la ciencia, Madrid, Alianza, pp. 554-578].
Merton, R. K. (1973), “‘Recognition’ and ‘Excellence’: instructive ambiguities”, in Merton, R. K., The sociology of science: theoretical and empirical investigations, Chicago, University of Chicago Press, pp. 419-438. [In Spanish: “‘Reconocimiento’ y ‘Excelencia’. Ambigüedades instructivas”, in Merton, R. K. (1977), La sociología de la ciencia, Madrid, Alianza, pp. 531-553].
Mitroff, I. and D. Chubin (1979), “Peer review at the NSF: a dialectical policy analysis”, Social Studies of Science, vol. 9, N° 2, pp. 199-232.
Narin, F. and M. Carpenter (1975), “National publication and citation comparisons”, Journal of the American Society for Information Science, vol. 26, N° 2, pp. 80-93.
Price, D. J. de Solla (1978), “Editorial statements”, Scientometrics, vol. 1, N° 1, pp. 3-8.
Price, D. J. de Solla (1986a), “Measuring the size of science”, in Price, D. J. de Solla, Little Science, Big Science... and beyond, New York, Columbia University Press, pp. 135-154.
Price, D. J. de Solla (1986b), “Citation measures of hard science, soft science, technology and nonscience”, in Price, D. J. de Solla, Little Science, Big Science... and beyond, New York, Columbia University Press, pp. 155-179.
Rabkin, Y. M. and H. Inhaber (1979), “Science on the periphery: a citation study of three less developed countries”, Scientometrics, vol. 1, N° 3, pp. 261-274.
Roy, R. (1984), “Alternatives to review by peers: a contribution to the theory of scientific choice”, Minerva, vol. 22, N° 3-4, pp. 316-328.
Salomon, J-J. (1996), “La ciencia y la tecnología modernas”, in Salomon, J-J., F. Sagasti and C. Sachs (comps.), Una búsqueda incierta: ciencia, tecnología y desarrollo, Mexico City, Editora de Universidad de las Naciones Unidas/Fondo de Cultura Económica, pp. 49-86.
Sarewitz, D. (2016), “Saving Science”, The New Atlantis, vol. 49, pp. 5-40. [In Spanish: Sarewitz, D. (2017), “Salvar la ciencia”, Revista de Economía Institucional, vol. 19, N° 37, pp. 31-65, reproduced in this dossier].
Shapin, S. (1996), The scientific revolution, Chicago, University of Chicago Press. [In Spanish: Shapin, S. (2000), La revolución científica. Una interpretación alternativa, Barcelona/Buenos Aires, Editorial Paidós].
STEPS Centre – University of Sussex (2010), “Innovation, Sustainability, Development: A New Manifesto”, Brighton, University of Sussex. [In Spanish: “Innovación, sustentabilidad y desarrollo. Un Nuevo Manifiesto”. Available at: <https://steps-centre.org/wp-content/uploads/manifesto-laspanish.pdf>, reproduced in this dossier].
The Slow Science Academy (2010), “The Slow Science Manifesto”, Berlin, The Slow Science Academy. [In Spanish: “El Manifesto de la ciencia lenta”, published in this dossier].
Travis, G. and H. Collins (1991), “New light on old boys: cognitive and institutional particularism in the peer review system”, Science, Technology and Human Values, vol. 16, N° 3, pp. 322-341.
Unesco (2000), Declaración sobre la Ciencia y el Uso del Saber Científico. La ciencia para el siglo XXI: un nuevo compromiso, Paris, Unesco.
Unesco (2015), Unesco Science Report: towards 2030, Paris, Unesco.
Van den Beemt, F. C. H. D. and A. F. J. van Raan (1995), “Evaluating research proposals”, Nature, vol. 375, N° 6529, p. 272.
Velho, L. (1989), “Avaliação acadêmica: a hora e a vez do baixo clero”, Ciência e Cultura, vol. 41, N° 10, pp. 957-968.
Weinberg, A. M. (1963), “Criteria for scientific choice”, Minerva, vol. 1, N° 2, pp. 159-171.
Weinberg, A. M. (1964), “Criteria for scientific choice II: the two cultures”, Minerva, vol. 2, N° 1, pp. 3-14.
Weingart, P. (2005), “Impact of Bibliometrics Upon the Science System: Inadvertent Consequences?”, Scientometrics, vol. 62, N° 1, pp. 117-131.
Wenneras, C. and A. Wold (1997), “Nepotism and sexism in peer-review”, Nature, vol. 387, N° 6631, pp. 341-343.
Wilsdon, J. et al. (2015), The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management, Stoke Gifford, HEFCE.
Wouters, P. et al. (2015), The Metric Tide: Literature Review. (Supplementary Report I to the Independent Review of the Role of Metrics in Research Assessment and Management), Stoke Gifford, HEFCE.
Ziman, J. (1994), Prometheus bound: science in a dynamic steady state, Cambridge, Cambridge University Press.
Zuckerman, H. and R. K. Merton (1973), “Institutionalized Patterns of Evaluation in Science”, in Merton, R. K., The sociology of science: theoretical and empirical investigations, Chicago, University of Chicago Press, pp. 460-496. [In Spanish: “Pautas institucionalizadas de evaluación en la ciencia”, in Merton, R. K. (1977), La sociología de la ciencia, Madrid, Alianza, pp. 579-621].
License
Copyright (c) 2020 Redes. Journal of Social Studies of Science and Technology. The documents published here are governed by the licensing criteria of Creative Commons Argentina: Atribución - No Comercial - Sin Obra Derivada 2.5 (Attribution-NonCommercial-NoDerivs 2.5 Argentina), <https://creativecommons.org/licenses/by-nc-nd/2.5/ar/>.