Higher Education (2020) 79:829–846
https://doi.org/10.1007/s10734-019-00440-1

Ratings, rankings, research evaluation: how do Schools of Education behave strategically within stratified UK higher education?

Marcelo Marques & Justin J. W. Powell

Published online: 16 August 2019
© The Author(s) 2019, corrected publication 2019

Abstract
While higher education research has paid considerable attention to the impact of both ratings and rankings on universities, less attention has been devoted to how university subunits, such as Schools of Education, are affected by such performance measurements. Anchored in a neo-institutional approach, we analyze the formation of a competitive institutional environment in UK higher education in which ratings and rankings assume a central position in promoting competition among Schools of Education (SoE). We apply the concepts of “institutional environment” and “organizational strategic actors” to the SoE to demonstrate how such university subunits articulate their qualities and respond to the institutional environment in which they are embedded—by using ratings and rankings (R&R) to compete for material and symbolic resources as well as inter-organizational and intra-organizational legitimacy. Through findings from 22 in-depth expert interviews with members of the multidisciplinary field of education and a content analysis of websites (n = 75) of SoE that participated in REF 2014, we (1) examine the stratified environment in which SoE are embedded, (2) uncover how R&R are applied by SoE within this competitive, marketized higher education system, and (3) indicate the strategic behaviors that have been triggered by the rise of R&R in a country with a highly formalized and standardized research evaluation system. The results show both homogenization and differentiation among SoE in their use of organizational vocabulary and their applications of R&R, while simultaneously revealing strategic behavior ranging from changes in internal practices to changes in organizational structures.

Keywords Research evaluation · Educational research · Ratings · Rankings · Competition · Higher education · UK

* Marcelo Marques, marcelo.marques@uni.lu
1 Institute of Education and Society, University of Luxembourg, Maison des Sciences Humaines, 11 Porte des Sciences, L-4366 Esch-sur-Alzette, Luxembourg

Introduction

Ratings and rankings (R&R) have assumed considerable importance in the (global) governance of education, especially of universities. Ratings related to the quality of research and teaching are increasingly used by national governments to allocate public funding where peer reviewers find “quality” and “excellence.” Similarly, (inter)national rankings are, more than ever, used to construct a sense of “scarcity of reputation” on a global scale (Brankovic et al. 2018), leading universities to invest considerably in their brands, driven by images of distinction to attract material and symbolic resources and talented individuals (Drori et al. 2016). Much of the literature has explored the impact of rankings on the policymaking level, on university strategic action, or on students’ choices (see Dill and Soo 2005; Clarke 2007; Bowman and Bastedo 2009; Hazelkorn 2011; Collins and Park 2016). Far less attention has been devoted to understanding consequences for disciplines and schools or other university subunits.
Exceptional and important accounts include studies of the effects of rankings on law schools in the USA (Espeland and Sauder 2009, 2016) or on business schools (Wedlin 2006; Rasche et al. 2014). Rankings are often viewed as instruments that foster surveillance and normalization (Espeland and Sauder 2009), thus changing the perceptions of legal education due to the internalization of forms of control and the imposition of a process of normalization based on comparisons of performance (Espeland and Sauder 2016). Similarly, rankings “discipline” business schools by enhancing the visibility of individuals’ performances, by defining “normal” behavior, and by shaping how people understand themselves and the world around them (Rasche et al. 2014). Wedlin (2006) uncovers rankings as classification mechanisms that shape and structure fields and establish their boundaries. Other multidisciplinary fields, such as education, and how their key actors strategically behave and are impacted by ratings and rankings, have not been researched in depth; thus, we pursue this here.

Contrary to more established, prestigious academic fields such as law or, more recently, business studies, education has long struggled to bolster its legitimacy within the higher education system, especially in research universities and in English-speaking countries (see, e.g., Lagemann 2000). This challenge often resulted from the incorporation of teacher education into the university. Further, its variable traditions as a multidisciplinary field that focuses on the study of educational processes and practice via numerous, sometimes conflicting, disciplinary lenses make it more challenging to grasp holistically (Lawn and Furlong 2009; McCulloch 2017). Tensions between the academic field and the field of practice (Biesta 2011) have called into question not only the scientific legitimacy of the field but also the status of education as an academic discipline within the higher education system and universities themselves (Furlong 2013). Given the rise of “ranking regimes” (Gonzales and Núñez 2014) and “performative accountability” (Oancea 2008) in higher education, we here investigate how such developments have affected the behavior of university subunits, focusing on 75 Schools of Education (SoE) in the UK. The UK is a particularly insightful case because it has, since the mid-1980s, developed an encompassing system of evaluation that has in turn generated much of the power of ratings and rankings (Marques et al. 2017).

We examine the construction of the institutional environment in which Schools of Education are embedded and their evolving strategic behavior in competing for symbolic resources and legitimacy within the organizational field. Concretely, how have third parties intensified competition, linking SoE as competitors within their institutional environment, via now-ubiquitous ratings and rankings? While research ratings are produced by the Research Excellence Framework (REF)—previously: Research Assessment Exercise (RAE)—under the jurisdiction of the Higher Education Funding Councils (HEFCs), research rankings are produced by media organizations, such as Times Higher Education or The Guardian, for their own profit.

Firstly, we conceptualize the conversion process of ratings into rankings as an influential change process that has directly and indirectly affected the institutional environment of higher education (Meyer and Rowan 1977; Scott 1992; Meyer et al. 2007) in which Schools of Education are embedded.
To understand the strategic behavior of university subunits, we conceptualize Schools of Education as “organizational strategic actors” (Krücken and Meier 2006; Ramirez 2013; Seeber et al. 2015) that compete not only for material resources but especially for symbolic resources, such as reputation and legitimacy. They do so internally, within their specific organizational structures, vying for status in disciplinarily stratified organizations, and externally, within the stratified UK higher education system as a whole. Secondly, we discuss the study’s data and methods. We conducted 22 expert interviews with members of the field of educational research in the UK. Moreover, we analyze the “organizational vocabulary” (Meyer and Rowan 1977) and the application of results of ratings and rankings (R&R) by the majority of UK SoE (n = 75) via their websites. Thirdly, we reconstruct the development of the highly competitive institutional environment of SoE by charting the introduction and institutionalization of the rating system of the UK’s research evaluation system, and the role of media organizations in (re)framing competition and driving its marketization. Later, we show how R&R are used differentially by SoE—according to their position in the Times Higher Education Research Intensity 2014 GPA rank order. We next highlight how such a competitive institutional environment is reshaped by R&R that are utilized by the SoE to bolster their inter-organizational and intra-organizational competitive advantage. R&R also trigger strategic behavior that is visible in several effects, ranging from changes in research management structures to the establishment of new internal practices. Finally, we discuss the results and their implications.

Institutional environment and strategic actorhood in higher education: a neo-institutional perspective

Anchoring our endeavor theoretically in neo-institutional thinking, we conceptualize the conversion of research evaluation ratings into rankings as a crucial shift in the institutional environment of contemporary higher education. In the case of UK-based Schools of Education, we investigate the rising influence and impact of R&R. As defined by Scott (1992), institutional environments are characterized “by the elaboration of rules and requirements to which individual organizations must conform in order to receive legitimacy and support” (p. 132). Following the seminal work of Power (1997) that charted the rise of an “audit society,” in which accountability and evaluation have become ever more ubiquitous, numerous variations of this argument have explained change in the governance of higher education systems. Notions of “performative accountability” (Oancea 2008), “ranking regimes” (Gonzales and Núñez 2014), or “governing by numbers” (Ball 2017) all identify the worldwide transformation of higher education into a more “accountable” sector, whose outputs are explicitly measured and evaluated through numerous forms of R&R.

The concept of “organizational field” has been used in institutional theory to understand the relationship between institutional environments and organizations (Aldrich and Ruef 2006; Scott 2013). More than analyzing one individual organization in relationship to its institutional environment, the concept of organizational field takes as its unit of analysis the totality of the relevant actors (DiMaggio and Powell 1991).
In this context, organizational fields can be defined as a group of organizations—embedded in the same institutional framework (cultural-cognitive blueprints, norms, and rules and regulations)—that compete for the same resources and legitimacy (DiMaggio and Powell 1991; Scott 1992; Wedlin 2006; Brankovic 2018). Resources are not only material assets, such as the research funding distributed by RAE/REF ratings after each round of evaluation, but also symbolic assets such as reputation or prestige, whose distribution is increasingly determined by rankings (Bastedo and Bowman 2010). But, as DiMaggio and Powell (1991) point out, every organization must take into account other organizations, because they compete not only for resources within their environment but also for political power and for institutional legitimacy.

We extend such arguments to UK Schools of Education, as subunits within the complex organizational structures of universities, that must compete for both inter-organizational and intra-organizational legitimacy. In so doing, diverse R&R provide measures and details of performance and reputation at various levels. Therefore, we look at the production of ratings by the UK higher education funding bodies and the production of rankings by media organizations as the “third parties that set the framing competition” (Hasse and Krücken 2013: 183). Such R&R link Schools of Education, inter-organizationally and intra-organizationally, in a competition for both material and symbolic resources and legitimacy.

For instance, Wedlin (2006) shows how rankings can be perceived as arenas for boundary-work in organizational fields in her examination of business school rankings. She conceptualizes rankings as classification mechanisms that contribute to building perceptions of which organizations belong to the field and which do not—uncovering how field boundaries are subject to struggle and conflict. Thus, rankings as classification mechanisms are influential in field formation as they not only differentiate but also stratify, identifying (non)members and creating models that business schools attempt to emulate. Indeed, in their foundational text of sociological neo-institutionalism, Meyer and Rowan (1977) argue that institutional environments lead to homogenization mirrored in the formal structures of organizations. Both the labels in organizational charts and the organizational vocabulary used in official documents are sound indicators of homogenization within a field. Higher education scholarship has evidenced signs of homogenization among universities (see Hüther and Krücken 2016), including in the vocabulary usually found in mission statements (Kosmützky and Krücken 2015) or welcome addresses (Huisman and Mampaey 2015).

Nevertheless, such accounts have shown not only homogenization but also differentiation. In fact, the adoption of the same vocabulary (isonymism) does not necessarily mean that organizations implement the same practices (isopraxism) or imply isomorphism in structural form (Erlingsdóttir and Lindberg 2005). Such aspects highlight the dynamic nature of competitive organizational fields (Wedlin 2006; Brankovic et al. 2018) and also show how universities are increasingly considered organizational actors, along with the “image of an integrated, goal-oriented entity that is deliberately choosing its own actions and that can thus be held responsible for what it does” (Krücken and Meier 2006: 241).
While inter-organizational stratification between universities produced by rankings has received considerable attention (Bloch et al. 2018), intra-organizational segmentation within university subunits has yet to be studied comprehensively. Recent studies have shed light on how R&R produce differences in resources and status among universities’ subunits and trigger strategic behavior (Cantwell and Taylor 2013; Rosinger et al. 2016). Therefore, looking at the case of UK Schools of Education, we examine the strategic behavior of university subunits, which deserves further analysis.

Previous research on the institutionalization of the research evaluation system in the UK demonstrated how the ideas of “quality,” “excellence,” and “impact” have been—gradually, but incontrovertibly—embedded in the institutional environment of UK higher education, including in the multidisciplinary field of education (Marques et al. 2017; Zapp et al. 2018). Schofield et al. (2013) find that older UK universities tend to stress the international dimension and the caliber of their staff, while the younger ones rely more on regional familiarity and student experience to mark their qualities. Moreover, while highly reputed universities tend to have long enjoyed high status and can thus easily build an international brand of “excellence” in a digital age, less well-reputed universities must struggle, tending to have more national, local, and intra-institutional brand orientations, even when they share the same high valuation of “excellence”—revealing global tendencies of homogenization (Mampaey and Huisman 2016) alongside national organizational differentiation. We extend such arguments to UK Schools of Education, uncovering how universities’ disciplinary-based organizational subunits embedded in a stratified institutional environment present themselves to the world. We expect to find both homogenization and differentiation as the scarce good of reputation is (re)distributed within disciplinary hierarchies in universities and within a highly stratified higher education system.

Methods and data

For numerous reasons, the UK higher education system provides an important case study. Its universities still enjoy strong institutional autonomy (Shattock 2012) within a highly competitive environment. Its stratification reflects a high level of marketization, exemplifying the growing evidence of convergence towards a market-oriented model (Dobbins and Knill 2014), and the UK developed the first and strongest of the research evaluation systems that have spread across Europe (Marques et al. 2017; Zapp et al. 2018). Anchoring our analysis in neo-institutional theorizing and examining the specific case of UK Schools of Education, our analysis is guided by two questions: (1) What influence and impact do ratings and rankings have in framing a competitive institutional environment for UK Schools of Education? (2) How do Schools of Education use ratings and rankings to bolster their competitive advantage, and what kinds of strategic behavior do such ratings and rankings trigger?

We conducted 22 in-depth expert interviews with members of the field of educational research. While one important interview was conducted with a member of a UK Higher Education Funding Council, the remaining 21 interviews were conducted with academics who assumed boundary-spanning activities within schools and universities.
The selection of our purposive sample was then specified based upon two main criteria: (a) seniority in the field, and (b) the boundary-spanning nature of the interviewee’s career. The first criterion concerns field members who were subjected to evaluation in at least two research assessment exercises and who could provide information about the long-lasting effects of the research evaluation system in the Schools in which they developed, or currently develop, their professional activities. The second criterion concerns the boundary-spanning roles that academics embody (Tushman and Scanlan 1981), leading to interviews of those not only directly impacted by the funding instrument but also involved in some peer review or management position within the field, including participation in RAE/REF panels as peer reviewers, directors of research, and directors of departments or schools. Because we are interested in exploring the formation of the competitive institutional environment in which SoE are embedded, the questions related to the experience of the field members along their career trajectories, often in different units and organizations. These interviews were transcribed and analyzed using MAXQDA, following the phases of thematic analysis according to Braun and Clarke (2006). Four major themes were created: institutional environment (67 codes), ratings (86 codes), rankings (23 codes), and strategic behavior (75 codes). Each interview is anonymously coded as field member: FM1 to FM21 for academics and FM22 for the staff member of the Higher Education Funding Council.

To complement the interviews, which provided personal retrospective data, we analyzed the websites of those 75 UK Schools of Education [1] that participated in REF 2014. We conducted content analysis of their use of REF ratings, REF-based media rankings, and all vocabulary used to describe their research and/or organizational structures (usually found on the Front Page, Research page, and/or About Us page). For each School, we collected such information and analyzed the data using MAXQDA. Here, four major themes were created: ratings (91 codes) for explicit references to any form of rating, rankings (59 codes) for explicit references to any type of ranking, where (48 codes) to understand where Schools of Education declare their results, and organizational vocabulary (194 codes) to understand the ways that Schools of Education present themselves and their research to the world. To assure reliability, two members of our research team applied the 392 codes to our dataset. While our initial aim was to look only for the use of REF ratings and REF-based media rankings, we soon perceived that several Schools made explicit references to different forms of national ratings and national and global rankings, combining results to bolster their profile. Thus, we decided to include them as subcodes in the coding matrix. This initial empirical finding suggested the need to chart differential usage and to analyze the conversion process of ratings into rankings.

[1] University College London submitted two submissions—Institute of Education and Medical Education. In this analysis, we only counted the submission related to the Institute of Education.

As mentioned before, the UK higher education system is well known for its strong and relatively stable prestige and status distinctions (Scott 2001).
Nevertheless, recent studies have challenged such propositions, for instance, through the study of the changing student body (Tight 2007) or the presence of a wide range of variant or alternative models of the university—not only the globally recognized Oxbridge (Tight 2009)—as the UK shifted from system differentiation towards institutional diversification (Scott 2009). Boliver’s (2015) results confirm the increasingly blurred boundaries within the UK higher education system, which nevertheless, she argues, exhibits four distinctive clusters, based on analysis of research income, organization size, percentage of postgraduates, and RAE 2008 scores: a stark division between older and newer universities is still evident, as is the consolidated position of Oxford and Cambridge as the most elite tier among older universities; the difference in teaching between older and newer universities is less pronounced (indicating that newer universities affirm their mission as teaching-led universities); the Russell Group universities do not form an elite group distinctive from their older counterparts; and, finally, new universities form a fourth cluster with fewer resources, attended by less socioeconomically advantaged students. Such studies confirm the maintained binary division between old and new universities, while at the same time emphasizing the institutional diversification among research-led and teaching-focused universities.

Our sample is composed of roughly half older and half newer universities (those organizations whose status as university was conferred by the Further and Higher Education Act 1992 or later). Analyzing their position in the Times Higher Education Research Intensity 2014 GPA rank order, we evaluate their organizational vocabularies and their uses of R&R to uncover patterns of homogenization and differentiation among 75 SoE. We next turn to the results.

Strategic behavior among UK Schools of Education within a competitive and stratified institutional environment

Here, we show the results related to the shaping of the competitive and stratified institutional environment for UK universities and the resulting shifts in strategic behavior of Schools of Education. The first part is based on the expert interviews, while the second also derives findings from the website analysis of the uses of ratings and rankings by the 75 SoE.

Shaping the competitive institutional environment for Schools of Education

Within UK higher education, the interviewees were unanimous in flagging two decisive moments marking the intensification of competition. The first relates to the turning point in the funding arrangements of the higher education sector; the second concerns the introduction of a research evaluation system that has continuously evolved, becoming stronger and more formalized. The Further and Higher Education Act (1992) replaced the previous UK-wide funding body, the Universities Funding Council, and created bodies to fund higher education in England and Wales, with impact on the funding arrangements in Scotland and in Northern Ireland. Moreover, the legislation ended the binary division between universities and polytechnics by granting the latter university status.
The second turning point refers to the introduction of the research evaluation system and its research rating system, designed to distribute research funding according to criteria and indicators of quality judged by peer review (for a comprehensive study of the evolution of the research evaluation system, see Marques et al. 2017). Its genesis can be traced to 1986, under the umbrella of the Universities Funding Council, but the first UK-wide exercise was conducted in 1992, under the jurisdiction of the funding bodies in the UK’s four nations. Previously, funding had been allocated in block grants solely to universities, without distinguishing funds for research or teaching.

The institutionalization of this research evaluation system and the inclusion of new organizations (new universities) as competitors for research funding are understood as critical junctures: the genesis of rising competition within UK higher education as an institutional environment in which research is viewed increasingly as a commodity:

“…suddenly it introduced a competitive element in terms of relating funding to outputs of different kinds… it introduced a whole sector of new practices related to the judgment of quality related to indicators or measures. This reduction of quality to a 1, 2, 3, 4 measurement framework. It was very alien. But had an enormous impact in the way people thought about their work, which intensified over the different RAE exercises. And it explains fundamentally a translation of research into a commodity. It was a process of commodification. Where the research was increasingly judged not in terms of intrinsic worth, but in terms of its performative measures and its income generation. So, during this period, from the 1990s, income generation was becoming increasingly important, and the research was one of the ways income has been generated” (FM10).

Throughout the institutionalization of the exercise (RAE 1996, RAE 2001, RAE 2008, and REF 2014), numerous changes restructured the system to formalize and standardize procedures among panels and between disciplines or units of assessment (see Marques et al. 2017). Two important moments in the intensification of the competition within the field are the changes in the rating scale and the government’s decision, in 2003, to concentrate the allocation of funding in the top-rated universities, exacerbating stratification. In RAE 1996 and RAE 2001, the research evaluation system allocated research funding based on a seven-point rating scale (1, 2, 3b, 3a, 4, 5, 5*), attributing to each department an overall profile. From RAE 2008 onwards, the rating is distributed in percentages for each point (e.g., 3 = 25%), based on a rating scale that ranges from 0 to 4: “unclassified” as “below the standard of nationally recognized work,” 1* as “nationally recognized,” 2* as “internationally recognized,” 3* as “internationally excellent,” and 4* as “world-leading” (REF 2014). Before 2003, departments that achieved a 3a were awarded funding, while in the following years only departments rated as 4 (RAE 2001) and 3 (RAE 2008 and REF 2014) were granted research funding. This pressured universities to make sure that they reached at least the rating that would secure the allocation of some funding:
“Unfortunately, the government said we had to move to only rewarding the very best. That didn’t actually shift much money around, and it had the drawback of telling people that what we classified as two-star research wasn’t worth funding. And that’s very unfortunate... I would be very proud if I was doing two-star research, you are still doing stuff that moves the discipline forward, which is quite satisfactory. However, according to the government’s decision you concentrate more money on the best” (FM22).

“The idea is that originally, you know, research was rated as one, two, three, four, five, and even two-rated institutions got some money. And what’s been happening over the past 20-30 years is that this funding has become more and more selective. So now only the very, very top institutions are getting any REF-funded money” (FM16).

A second development was the introduction of a transparency policy in 2001 to ensure that RAE/REF submissions and results are available to the wider public, which led to the use of rating results by media organizations. While the Higher Education Funding Councils (HEFCs) in the UK produce ratings to inform and legitimate the allocation of funds, media organizations produce rankings to make profits, often through sales of newspapers, magazines, guidebooks, and online sources. Since 2001, the Times Higher Education has created RAE league tables (for RAE 2001) and “tables of excellence” (for RAE 2008 and for REF 2014), contributing to the mediatization of these scientific evaluations. In the RAE 2001 league table, only universities were ranked. In RAE 2008, not only universities themselves but also individual units of assessment were ranked, through the calculation of a grade point average (GPA) for each unit, department, and, finally, university. In REF 2014, the same rationale was applied, with the addition of a new category called “research power” to tie-break universities with the same GPA; this result is based on the multiplication of a GPA and the total number of full-time equivalent staff submitted to the REF (a worked example follows below). Similarly, The Guardian newspaper has published RAE/REF rankings since 2001. For the last exercise, The Guardian used the “power rankings,” created by Research Fortnight, to determine the total funding allocation for each organization.

The gradual extension of media companies’ scrutiny from ranking universities’ performance to ranking that of departments and schools can be understood as a “shaping force” that exacerbates the competitiveness of the environment in which universities and schools are embedded:

“But be aware that big tables are constructed by the journalists. Now, it’s not our job, I am afraid, to make it easy for anyone in the league tables to win… I think in practice there’s a project system as well and it is more extreme in most ways than the system we run. All people have got to do is publish ideally four, it might produce less, outputs. If you publish four decent outputs, there is no pressure doing anything else” (FM22).

“Oh, enormously, enormously [media rankings on the intensification of competition]. What happens is, it makes these sorts of rankings dominate everything, because you’ve got these rankings out there. So, I think it’s had a huge impact” (FM17).

Competition is explained less by the anticipation of consumers’ needs than by the “third parties that set the framing competition, thereby linking potential competitors to each other” (Hasse and Krücken 2013: 183). Here, we identify the roles of peer review panels and Higher Education Funding Councils in producing ratings, and of media organizations in producing rankings, as the third parties that link Schools of Education as direct competitors in an organizational field.
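To make the league-table arithmetic concrete, the following is a minimal worked example; the quality profile and staff figures are invented for illustration and describe no actual submission. A unit's GPA weights each starred level by the share of the submission rated at that level, and "research power" multiplies the GPA by the submitted full-time equivalent (FTE) staff:

\[ \text{GPA} = \sum_{s=0}^{4} s \cdot p_s, \qquad \text{research power} = \text{GPA} \times \text{FTE}, \]

where \(p_s\) is the proportion of the submission rated at level \(s\). For a hypothetical unit rated 50% at 4*, 35% at 3*, 12% at 2*, and 3% at 1*, the GPA is \(4(0.50) + 3(0.35) + 2(0.12) + 1(0.03) = 3.32\); if the unit submitted 40 FTE staff, its research power is \(3.32 \times 40 = 132.8\). Since post-2003 funding flows only to the top-rated shares of a profile, two units with the same GPA can attract quite different funding and "research power" scores, depending on the size and shape of their submissions.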
We next turn to the strategic behavior of Schools of Education as they seek to gain resources and achieve legitimacy within the field. How do SoE react within such an increasingly competitive institutional environment?

All leaders? Institutional effects of R&R and the strategic behavior of Schools of Education

Presenting findings, we analyze here the strategic behavior of Schools of Education, examining particularly the “organizational vocabulary” that they use to describe their activities and their use of diverse ratings and rankings, to show how they compete for material and symbolic resources, such as reputation and legitimacy, within the field. We complement such results with the interviews. While we identify isonymism and, to a certain extent, isopraxism, we also distinguish important differences between the 75 SoE that participated in the last REF. Figure 1 shows the organizational vocabulary used among UK Schools of Education, taking into consideration their position in the Times Higher Education REF 2014: Subject ranking on intensity – 2014 GPA rank order, which is produced based on the results of REF ratings. The top 20 is composed solely of older universities, and only four new universities are within the top 40. The lower positions, by contrast, are filled only by new universities.

[Fig. 1 Organizational vocabulary among UK Schools of Education according to their position in the Times Higher Education REF 2014: Subject ranking on intensity – 2014 GPA rank order (n = 75). Bar chart of the frequency of the terms Leader, Excellence, Quality, Reputation, Top, World Class, Innovative, Impact, Strong, Largest, and Thriving across the rank bands 1–20, 21–40, 41–60, and 61–75. Source: Authors’ data collection and presentation.]

Despite such stratification of the SoE, the results show that no matter the position in the league tables, SoE make use of terms of singularity to describe their organizational capacity in the competitive environment in which they are embedded. Therefore, designations such as “leader” (52%), “excellence” (44%), “quality” (35%), “reputation” (32%), “impact” (16%), “innovative” (16%), “top” (16%), “strong” (13%), or “thriving” (8%) can be found as important designators used by SoE ranging from the highest to the lowest positions. This result shows isonymism—adoption of the same organizational vocabulary—among SoE in their attempts to gain symbolic resources. Even with the common use of certain adjectives to show their “marks of distinction” and prestige, there is variation in their use. SoE that occupy the highest positions in the table place special emphasis on affirming their position as “leaders,” their “reputation,” their classification in the table as “top” organizations, or their status as “world-class” schools. In contrast, SoE positioned in the lowest ranks place strong emphasis on the “quality” and “excellence” of their activities, especially in relationship to teaching, and the “thriving” status of their work. Such results show that while those in the highest positions make these references to maintain their position, those in the lowest refer to their developmental trajectory (on the move) and future status. In fact, interviewees stated that post-1992 universities understand the evaluation as a way to improve their standing, to prove that they are “REF-able,” and, ultimately, to increase their inter-organizational legitimation in the field as research providers within a highly stratified higher education system:
“They are pretty much all very enthusiastic about the system because they all manage to get reputation. They all got some area, where something has happened, they may not earn much money, but at universities like that the research money is a drop in the ocean, compared to the student money. They are not in the research game for money. They are in it for reputation” (FM22).

Another important result relates to the homogenization in the use of words such as “excellence,” “impact,” and “quality” that are so entrenched in the REF vocabulary. Despite variations in the use of certain terms by SoE, “excellence,” “impact,” and “quality” are designators used evenly by SoE, no matter their position in the table, reflecting the adherence of SoE to the research evaluation system and its considerable and growing impact on UK higher education since the mid-1980s.

[Fig. 2 References to R&R according to the position of Schools of Education in the Times Higher Education REF 2014: Subject ranking on intensity – 2014 GPA rank order (n = 75). Bar chart (0–90%) of references to REF ratings, other ratings, REF rankings, and other rankings across the rank bands 1–20, 21–40, 41–60, and 61–75. Source: Authors’ data collection and presentation.]

Figure 2 provides a clear picture of how “marks of distinction” are translated into concrete results and usages. Overall, 68% of our sample (51 SoE) make explicit reference to some form of R&R, while the rest make no reference to them. Moreover, 37% of those SoE that do make reference to R&R opt to refer to both forms. We observe that no matter the position in the league table, references to both ratings and rankings are found. Most importantly, among those SoE that mention R&R, the vast majority behave similarly in using the REF definitions of starred levels for the produced outputs (88%): “nationally recognized,” “internationally recognized,” “internationally excellent,” and “world-leading” are by far more used than other references, such as references to the scale (1*, 2*, 3*, 4*) (20%), to “impact” (33%), or to the vitality and sustainability of the unit in the “environment” criteria (22%):

“This gives our Unit of Assessment a Grade Point Average (GPA) of 3.3 which means that we are ranked 5th in the field of education nationally and ranked joint 1st in the UK for world-leading research impact” (Durham, 1–20).

“Our educational research was rated highly in the UK-wide Research Excellence Framework (REF), with 76% of our research rated as internationally recognised, internationally excellent, or world-leading. Overall, Edge Hill U. was listed as the biggest improver in the league table published in The Times (+33 places)” (Edge Hill, 41–60).

Yet simultaneously with such homogenization, important differences also deserve mention. One important difference is that the SoE ranked in the highest positions make explicit references both to REF-based media rankings (85%) and to the REF rating (80%), which means that almost every SoE placed in the first 20 places shows its “mark of distinction” in the research evaluation system, either by stating its position in the ranking, its rating, or both. A second key difference is that other forms of ratings, such as that of the Office for Standards in Education (OFSTED) [2] related to the quality of teacher education provision, are mentioned by organizations across the board, except by those SoE that range between the 20th and 40th positions.
This shows that SoE in the highest but also in the lowest positions strategically show their “marks of distinction” in both teaching and research activities, while others only identify their achievements in research. A third noteworthy difference relates to the use of rankings. Seven out of nine rankings referred to were produced by media organizations for the national market and scope, such as The Times Good University Guide (17%), the Complete University Guide (14%), the National Student Survey (11%), the Guardian University Guide (8%), the Russell Group Ranking (3%), and the Good Teacher Training Guide (3%). Only two global rankings were mentioned: the QS World University Rankings (17%) and the THE World University Rankings (6%). This shows the contextualized nature of education as a nation-oriented discipline and the organizational field relationships of SoE within their national or regional scope of action. Moreover, SoE placed in the highest positions use more comprehensive international or national rankings, while those in middling or lowest positions mention rankings related to teaching activities. The emphasis placed on certain rankings can be seen as an indicator of a School’s orientation towards a more teaching-led or research-led, and a more international or national, profile: “Of course the REF ranking is important, but our aim is the QS ranking” (FM15).

[2] The Office for Standards in Education was created in 1994. Among other functions, it has inspected initial teacher education since 1996 through a framework that aims to assess the quality of initial teacher education courses through several criteria graded on a 1–4 scale (1—outstanding; 2—good; 3—requires improvement; 4—inadequate). Only Schools of Education in England that provide initial teacher training come under the jurisdiction of OFSTED.

As we have seen, both forms (R&R) play an important role in defining the image(s) of SoE, used either to maintain “marks of distinction” or, in the case of new universities, to strive for higher marks. But not only newer universities struggle for legitimacy in comparison to their older counterparts. The multidisciplinary field of education must also compete intra-organizationally for resources and reputation with other schools and units of assessment. Moreover, several interviewees reported that being highly rated/ranked helps SoE to gain institutional legitimacy (FM3, FM19, FM20) within their university:

“…and we came X (within the first 20 places), which was, for us, very low, very low... The majority of the departments of X were on the top ten or top five, on top” (FM17).

“Education departments are not well regarded in universities. Education is a field that has a lot of academic snobbery within and across universities. Quite often, universities are looking to close their Education departments and being highly ranked has made a difference. I’ve found that in X, I’ve found that here and certainly here in the university I’ve found that all sorts of doors are opened... So, that’s really, really helpful” (FM3).

Ratings and rankings “…are very sticky results. They are very tenacious, since they stay with you for a very long time. Even after the next exercise…” (FM2). The desire to remain in or achieve the top positions is perceived as something that creates tension and triggers strategic behavior among Schools of Education.
Such strategies range from internal practices in the management of research production to more profound changes in the overall structures of departments and their research management. Regarding the latter, although the RAE/REF did not directly trigger major restructuring, it has certainly influenced the arguments for reorganizing structures, such as separating teaching from research, or setting up new structures entirely dedicated to research (management) or to attracting research income to sustain jobs (FM4, FM20):

“What has been built was a really imaginative and potentially transformative structure to envision what education can look like within the university… that does lead to significant resourcing in order to perform to its potential within the time and the space of the next Research Excellence Framework” (FM20).

Regarding informal practices, members of the field identified the set-up of internal peer review assessments, including a more focused strategy devised to help colleagues’ career progression through co-writing processes as well as coaching (FM3, FM14, FM16, FM21). Perhaps most interesting is a convergence of results concerning the set-up of internal evaluations in which senior scholars read and rate the papers to be submitted for evaluation—in essence, performing internal quality control prior to the official submission (FM2, FM6, FM10, FM21). This exemplifies isopraxism—the implementation of similar practices across organizations. Field members have thus become more strategic, from one exercise to another, in selecting which staff members’ publications should be submitted, hoping to maximize the possible ratings of publications and diminish potential threats to the overall quality profile of their SoE:

“And then we would have a very highly advanced data section and we were looking at everything, looking at CVs and stuff. The chair of the faculty serves as a kind of generally friendly interrogation of ‘ok, what are you doing?’ … The fundamental question is who do you submit, that’s the first thing, really. So, you have to assess everybody’s work… Every university, I think, will have some system of trying to rate what they think the panel will rate” (FM21).

A third strategic feature concerns recruitment policies and procedures influenced by the RAE/REF and decision-making when hiring researchers: “So, I think in any appointment you make on the research, on the academic side, it was always having in mind REF, as one of the criteria” (FM3). On one hand, there is evidence of the creation of specific profiles, or at least of hiring people with certain characteristics, such as strong quantitative skills (FM4, FM16). Thus, “universities were pushed to buy researchers” (FM10), leading to a transfer market of knowledge producers (FM4, FM8, FM9, FM16, FM20):

“What happened is that you basically got a transfer market for professors set up. So, a university realised that if it had very highly productive professors, the money coming back as a result of the research assessment exercise, and the extra research funding that comes with it, you actually make a profit on that professor” (FM8).

Finally, a more recent strategy relates to the evaluation criterion of “impact,” implemented in the REF 2014 exercise and one of the main concerns for the next exercise in 2021, in which this dimension will count for 25%.
Despite the recent inclusion of this criterion, several field members refer to the perceived effects that the notion of “impact” has already triggered within the SoE: strategic behavior in selecting which cases to have assessed and how to write them, and a reorientation of research towards questions and designs that enhance the usability of research results in other societal spheres:

“Institutions are constantly on the lookout for potential impact case studies. They start to draft them and refine them from very early on, so when they come to the next research assessment, whenever it is, they’ve got several they can choose from and will make, you know, political choices about which ones to strengthen even more. It has changed a lot of behavior” (FM2).

Discussion and conclusions

Our analysis of the shaping of the competitive institutional environment in which UK Schools of Education are embedded highlighted the “third parties that set the framing competition” (Hasse and Krücken 2013: 183) among them. In the context of SoE in the UK, these third parties are the Higher Education Funding Councils that organize the rating of research quality and the media organizations that convert these research ratings into rankings to be marketed for their own profits. The former set the frame of competition related to material resources—research allocation based on indicators of quality derived through peer review. The latter, as classification mechanisms (Wedlin 2006), contribute to the on-going stratification of higher education by sorting SoE into high, middle, or low positions, increasing the competition and the pressure on SoE to maintain their status and/or strive for reputation.

By analyzing the organizational vocabulary of SoE and the uses that they make of R&R, we observe their embeddedness in the highly competitive institutional environment of UK higher education and their strategic behavior in negotiating this environment. Concretely, we showed that no matter the position of the SoE in league tables, the majority of SoE define themselves as “leaders” or among the “top,” with established “reputation” in their research and/or teaching activities within a “strong” or “thriving” environment. While the analysis of the “organizational vocabulary” (Meyer and Rowan 1977) reflects isonymism—adoption of similar terms (Erlingsdóttir and Lindberg 2005)—among SoE, we also find variation that reflects the stratified and increasingly competitive institutional environment in which SoE operate.

Perhaps the most interesting result relates to the uniformity—isonymism—among SoE in the use of REF-based vocabulary such as “excellence,” “quality,” and “impact” in their organizational descriptions. Such homogeneity is also found in their strategic use of myriad R&R, despite the clear positioning of SoE in the league tables. On one hand, such results show that SoE make use of their “marks of distinction” in order to secure or strive for reputation within their environment, particularly visible in the use of the REF rating definitions of the starred levels to characterize the quality of the research they produce. “World-leading,” “internationally excellent,” or “internationally recognized” are used to characterize not only the product but also the producer.
Through this “repackaging of university performance” (Sidhu et al. 2011) in narratives of prestige, words such as these have officially become part of the self-characterization of the SoE.

We also found considerable variation in the use of such forms, especially concerning the scope of activities (teaching or research) and the regional, national, or international levels referenced, confirming the findings of the study of UK universities’ welcome addresses by Mampaey and Huisman (2016). While the SoE at or near the top are older and well-established schools, those closer to the bottom are younger, with less time to have grown their programs, recruited renowned faculty members, and established their reputations. Several important variations we noted confirm prior research: organizations at the top make more use of rankings, while those in the middle positions make strong references to the REF ratings, indicating behavior of striving for as-yet unattained reputation and prestige (Schofield et al. 2013). Still others explain their position with such statements as: “We are one of the youngest universities in the UK but we are already leading the way in adding value to society, which we call social impact” (Northampton, ranked 65th). Indeed, the next cycle of the REF in 2021 will award fully 25% for “impact,” thus potentially reshuffling the ratings awarded and thus the rankings based upon them. This exemplifies the dynamic nature of the competitive institutional environment in which SoE are embedded. Such reshaping effects have also been demonstrated in the field of business schools (Wedlin 2006). Moreover, the analysis has confirmed that SoE not only compete for legitimacy inter-organizationally—newer universities attempt to strive for reputation in relationship to older and more established ones—but also intra-organizationally, by attempting to attain more legitimacy and counteract the often-lamented marginality of education within larger university structures. Taking into consideration that universities strategically allocate the “quality-related” research funding among their subunits, we call attention to future avenues of research that consider the effects of ratings, rankings, and research evaluation on the on-going organizational segmentation of universities—and the resultant unequal concentration of resources—discussed in recent literature (Cantwell and Taylor 2013; Rosinger et al. 2016).

Despite on-going critiques of the REF, the competitive setting of ratings and rankings not only creates “marks of distinction” but also triggers strategic behavior among SoE, within the confines of a highly stratified institutional environment that emphasizes organizational actorhood. Rankings as mechanisms of classification have the capacity to make or break reputations and to reallocate status, triggering tensions and exacerbating struggles within the field (Wedlin 2006). Through the interviews, we identified contemporary strategic behaviors. These ranged from changes in the structures of schools concerning research management (separation of teaching and research and setting up new structures for research management) to internal practices (internal peer review assessments, co-writing and coaching, the setting-up of internal evaluations, recruitment policies, and the selection of case studies to show “impact”).
While we observe isonymism and, to a certain extent, isopraxism among SoE, our data do not yet confirm isomorphic consequences in terms of structural changes among disciplinary units within the universities studied. Longitudinal analyses of such organizational structures to assess isomorphism across and between organizations are necessary. Such a research avenue could also contribute to our understanding of organizational segmentation and on-going stratification, and of the strategic behavior of universities’ subunits as they strive to maintain and gain status.

Overall, these results demonstrate how universities’ subunits such as SoE became “organizational strategic actors” (Krücken and Meier 2006; Hasse and Krücken 2013) that expand their internal management capacities and appear as integrated and goal-oriented units jockeying for position. Therefore, we also consider that the concept of universities as “organizational strategic actors” may be extended to university subunits, and we call attention to needed explorations of the horizontal and vertical relationships between such subunits, especially in highly competitive environments. This study contributes to our understanding of contemporary higher education system dynamics by uncovering how ratings are converted into rankings and how this process in turn triggers strategic behavior that impacts universities and (multi-)disciplinary subunits, such as Schools of Education, within them. It also calls attention to the need to study disciplinary subunits, as it emphasizes the importance of contextual frames of competition and their effects on inter-university and intra-university stratification and reputation alike.

Acknowledgements The authors thank Jennifer Dusdal, Mike Zapp, and the anonymous reviewers for their helpful and constructive comments on earlier versions of this paper.

Funding This research was funded by the University of Luxembourg’s tandem program for interdisciplinary research for the Project “The New Governance of Educational Research. Comparing Trajectories, Turns and Transformations in the UK, Germany, Norway, and Belgium” (EDRESGOV).

Compliance with ethical standards

Conflict of interest The authors declare that they have no conflicts of interest.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

References

Aldrich, H. E., & Ruef, M. (2006). Organizations evolving (2nd ed.). London: SAGE.
Ball, S. (2017). Governing by numbers. Abingdon: Routledge.
Bastedo, M. N., & Bowman, N. A. (2010). The U.S. News and World Report college rankings: modeling institutional effects on organizational reputation. American Journal of Education, 116(2), 163–184.
Biesta, G. J. J. (2011). Disciplines and theory in the academic study of education. Pedagogy, Culture & Society, 19(2), 175–192.
Bloch, R., Mitterle, A., Paradeise, C., & Peter, T. (Eds.) (2018). Universities and the production of elites. Basingstoke: Palgrave Macmillan.
Boliver, V. (2015). Are there distinctive clusters of higher and lower status universities in the UK? Oxford Review of Education, 41(5), 608–627.
Bowman, N. A., & Bastedo, M. N. (2009). Getting on the front page: organizational reputation, status signals, and the impact of U.S. News and World Report on student decisions. Research in Higher Education, 50(5), 415–436.
Brankovic, J. (2018). The status games they play: unpacking the dynamics of organizational status competition in higher education. Higher Education, 75(4), 695–709.
Brankovic, J., Ringel, L., & Werron, T. (2018). How rankings produce competition: the case of global university rankings. Zeitschrift für Soziologie, 47(4), 270–287.
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101.
Cantwell, B., & Taylor, B. J. (2013). Global status, intra-institutional stratification and organizational segmentation: a time-dynamic Tobit analysis of ARWU position among U.S. universities. Minerva, 51(2), 195–223.
Clarke, M. (2007). The impact of higher education rankings on student access, choice, and opportunity. Higher Education in Europe, 32(1), 59–70.
Collins, F. L., & Park, G. S. (2016). Ranking and the multiplication of reputation. Higher Education, 72(1), 115–129.
Dill, D., & Soo, M. (2005). Academic quality, league tables, and public policy: a cross-national analysis of university ranking systems. Higher Education, 49(4), 495–533.
DiMaggio, P. J., & Powell, W. W. (1991). The iron cage revisited: institutional isomorphism and collective rationality in organizational fields. In W. W. Powell & P. J. DiMaggio (Eds.), The new institutionalism in organizational analysis (pp. 63–82). Chicago: University of Chicago Press.
Dobbins, M., & Knill, C. (2014). Higher education governance and policy change in Western Europe. Basingstoke: Palgrave Macmillan.
Drori, G., Delmestri, G., & Oberg, A. (2016). The iconography of universities as institutional narratives. Higher Education, 71(2), 163–180.
Erlingsdóttir, G., & Lindberg, K. (2005). Isomorphism, isopraxism, and isonymism: complementary or competing processes? In B. Czarniawska & G. Sevón (Eds.), Global ideas: how ideas, objects and practices travel in the global economy (pp. 47–70). Malmö: Liber.
Espeland, W. N., & Sauder, M. (2009). The discipline of rankings: tight coupling and organizational change. American Sociological Review, 74(1), 63–82.
Espeland, W. N., & Sauder, M. (2016). Engines of anxiety: academic rankings, reputation, and accountability. New York: Russell Sage Foundation.
Furlong, J. (2013). Education—an anatomy of the field. Rescuing the university project. London: Routledge.
Gonzales, L., & Núñez, A.-M. (2014). The ranking regime and the production of knowledge. Education Policy Analysis Archives, 22(31), 1–24.
Hasse, R., & Krücken, G. (2013). Competition and actorhood: a further expansion of the neo-institutional agenda. Sociologia Internationalis, 51(2), 181–205.
Hazelkorn, E. (2011). Rankings and the reshaping of higher education. Basingstoke: Palgrave Macmillan.
Huisman, J., & Mampaey, J. (2015). The style it takes: how do UK universities communicate their identity through welcome addresses? Higher Education Research and Development, 35(3), 502–515.
Hüther, O., & Krücken, G. (2016). Nested organizational fields: isomorphism and differentiation among European universities. In E. Popp Berman & C. Paradeise (Eds.), The university under pressure (pp. 53–83). Bingley: Emerald.
Kosmützky, A., & Krücken, G. (2015). Sameness and difference. Analyzing institutional and organizational specificities of universities through mission statements. International Studies of Management & Organization, 45(2), 137–149.
Krücken, G., & Meier, F. (2006). Turning the university into an organizational actor. In G. Drori, J. W. Meyer, & H. Hwang (Eds.), Globalization and organization: world society and organizational change (pp. 241–257). Oxford: Oxford University Press.
Lagemann, E. C. (2000). An elusive science: the troubling history of education research. Chicago: University of Chicago Press.
Lawn, M., & Furlong, J. (2009). The disciplines of education in the UK. Oxford Review of Education, 35(5), 541–552.
Mampaey, J., & Huisman, J. (2016). Branding of UK higher education institutions. Recherches Sociologiques et Anthropologiques, 41(1), 133–148.
Marques, M., Powell, J. J. W., Zapp, M., & Biesta, G. (2017). How does research evaluation impact educational research? Exploring intended and unintended consequences of research assessment in the United Kingdom, 1986–2014. European Educational Research Journal, 16(6), 820–842.
McCulloch, G. (2017). Education: an applied multidisciplinary field? The English experience. In G. Whitty & J. Furlong (Eds.), Knowledge and the study of education: an international exploration (pp. 211–229). Oxford: Symposium Books.
Meyer, J. W., & Rowan, B. (1977). Institutionalized organizations: formal structure as myth and ceremony. American Journal of Sociology, 83(2), 340–363.
Meyer, J. W., Ramirez, F. O., Frank, D. J., & Schofer, E. (2007). Higher education as an institution. In P. J. Gumport (Ed.), Sociology of higher education (pp. 187–221). Baltimore: Johns Hopkins University Press.
Oancea, A. (2008). Performative accountability and the UK Research Assessment Exercise. ACCESS: Critical Perspectives on Communication, Cultural & Policy Studies, 27(1/2), 153–173.
Power, M. (1997). The audit society: rituals of verification. Oxford: Oxford University Press.
Ramirez, F. O. (2013). World society and the university as formal organization. Sisyphus, 1(1), 124–153.
Rasche, A., Hommel, U., & Cornuel, E. (2014). Discipline as institutional maintenance: the case of business school rankings. In A. Pettigrew, E. Cornuel, & U. Hommel (Eds.), The institutional development of business schools (pp. 196–218). Oxford: Oxford University Press.
Rosinger, K. O., Taylor, B. J., Coco, L., & Slaughter, S. (2016). Organizational segmentation and the prestige economy: deprofessionalization in high- and low-resource departments. The Journal of Higher Education, 87(1), 27–54.
Schofield, C., Cotton, D., Gresty, K., Kneale, P., & Winter, J. (2013). Higher education provision in a crowded marketplace. Journal of Higher Education Policy and Management, 35(3), 193–205.
Scott, W. R. (1992). Organizations: rational, natural, and open systems. Englewood Cliffs, NJ: Prentice-Hall.
Scott, P. (2001). Triumph and retreat. In D. Warner & D. Palfreyman (Eds.), The state of UK higher education (pp. 186–204). Buckingham: SRHE.
Scott, P. (2009). Structural changes in higher education: the case of the United Kingdom. In D. Palfreyman & T. Tapper (Eds.), Structuring mass higher education (pp. 35–55). New York: Routledge.
Scott, W. R. (2013). Institutions and organizations. Thousand Oaks, CA: SAGE.
Seeber, M., et al. (2015). European universities as complete organizations? Public Management Review, 17(10), 1444–1474.
Shattock, M. (2012). Making policy in British higher education 1945–2011. Maidenhead: Open University Press.
Sidhu, R., Ho, K. C., & Yeoh, B. (2011). Emerging education hubs: the case of Singapore. Higher Education, 61(1), 23–40.
Tight, M. (2007). Institutional diversity in English higher education. Higher Education Review, 39(2), 3–24.
Tight, M. (2009). Higher education in the United Kingdom since 1945. Maidenhead: Open University Press.
Tushman, M. L., & Scanlan, T. J. (1981). Boundary spanning individuals. The Academy of Management Journal, 24(2), 289–305.
Wedlin, L. (2006). Ranking business schools. Cheltenham: Edward Elgar.
Zapp, M., Marques, M., & Powell, J. J. W. (2018). European educational research (re)constructed—institutional change in Germany, the United Kingdom, Norway, and the European Union. Oxford: Symposium Books.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.