Chapter 2 Introduction

2.1 A hyper-competitive academic landscape

Trends show a move towards larger scientific teams (Wuchty, Jones, and Uzzi 2007; E. B. Araújo et al. 2017), higher numbers of publications (Ioannidis and Klavans 2018; Moed et al. 1991; Kim 2006), growing citation counts (Zhou and Bornmann 2015) and a continuing expansion of the academic system in terms of students and faculty members (Young 1995; Lomperis 1990). The increase in the size and dimensions of academia (Smith and Adams 2008; Shepherd 2017) has inspired a culture of quantification and numbers (e.g., funding and grants, publications and citations). This hegemony of numbers gives rise to a higher level of competition (Edwards and Roy 2017) in different aspects of academic research. From the early steps of the academic career (e.g., choosing a research subject) to the last (e.g., publishing and disseminating research results), attention is focused on numbers and output (Jonkers and Zacharewicz 2016).

The image of a sole scientist spending years in the lab to answer a single question and then publishing a book presenting this solitary scientific effort seems outdated now (Leahey 2016). For instance, an anthropologist who spends years living with and observing a small tribe, taking notes on people’s behaviour, and then further years presenting the work in an influential monograph is no longer a popular academic model, perhaps viable only in prestigious and well-funded departments. Pressures for funding, recurrent evaluations and multiple tasks seem to conspire against academic freedom, with probably only rare examples of a link between quality and efficiency (e.g., Sandström and Van den Besselaar (2018); Mallapaty (2018)).

It is worth noting that the dichotomy of quantity versus quality and the hegemony of quantitative research evaluation are leading academics towards salami slicing and myopic projects (e.g., Dupps and Randleman (2012); Smolčić (2013)). Furthermore, the importance of citations and quantifiable recognition increases the incentive to build invisible colleges (Crane 1972) and cartels whose members cite each other as much as possible (Fister Jr, Fister, and Perc 2016). This leads to a goal displacement, in that research follows evaluation priorities more than curiosity and genuine interest (Rijcke et al. 2016).

In other words, it seems that today evaluation has become a goal in itself (Marginson and Van der Wende 2007). There is evidence suggesting that academics could be inclined to maximize the impact of their research on evaluation, thus trying to game the system (Garcia 2018). The so-called pathological publishing and CV manipulation (Casal 2014) and the increasing number of cases of scientific fraud and misbehaviour are examples of this (Fang, Steen, and Casadevall 2012). Considering that disciplinary standards of excellence are often quantified, academics could arrange their research and publication strategies to maximize their impact (Lamont 2009). This in turn leads to higher levels of stress and burnout (Gill 2009) and to the fear of missing out on the newest scientific outcomes and falling behind the frontier (Hanlon 2016). This type of over-competitiveness increasingly demotivates scientists (Carson, Bartneck, and Voges 2013).

In 2013, the Nobel prize winner Peter Higgs said he would not be able to survive in such a competitive academic system (Aitkenhead 2013b, 2013a). Only a few scientists worldwide manage to be hyper-prolific enough to publish an article every five calendar days and stay above the average in overall publication rates (Ioannidis and Klavans 2018). Even the highest number of publications cannot guarantee an impact on the scientific community. Attaining the status of an impressive scientist requires that academics publish in diverse substantive areas, which can then draw attention from different sub-communities (e.g., Hargens (2004) names Merton as an example of a scientist with this status).

This applies to funding, promotion and reputation, all of which have changed drastically in the 21st century. Deciding who to hire, fund or promote is a matter of evaluation and metrics (Wilsdon et al. 2015; Edwards and Roy 2017; Nederhof 2006; Leahey, Keith, and Crockett 2010; Long 1992; Grant and Ward 1991). The vicious circle of funding scientific projects based on previous scientific activities (Clauset, Larremore, and Sinatra 2017) often results in lower funds for junior researchers and higher funds for senior, renowned scientists (Merton 1968). This has increased the tendency to do publishable research while neglecting interesting, genuine challenges (Wang, Veugelers, and Stephan 2017). These constraints are at work when choosing a subject, selecting collaborators and investing in certain sophisticated research methodologies and tools (Rijcke et al. 2016). They also extend to writing style, with reports that must have a narrative and a format compatible with referees and a journal’s audience.

The primacy of quantitative assessments has generated a debate in favour of the responsible use of metrics (Mejlgaard, Woolley, Bloch, Bührer, et al. 2018; Mejlgaard, Woolley, Bloch, Buehrer, et al. 2018; DORA 2012; Wilsdon et al. 2015; Jonkers and Zacharewicz 2016). Some observers have questioned whether the sheer number of metrics and their use make any sense (Bibliomagician 2018; Marginson and Van der Wende 2007). The call for the responsible use of these metrics has drawn attention to research institutions and funding agencies and to the need to select appropriate methods that respect the context. The recent discussions about peer review versus bibliometrics, informed peer review and, more recently, contextualized scientometrics are signs of a reaction to a self-reinforcing process of irresponsible adoption of evaluation metrics (Baccini and De Nicolao 2016; Ancaiani et al. 2015; Bertocchi et al. 2015).

Informed peer review (Hicks et al. 2015; Abramo and D’Angelo 2011a) is a modified version of peer review in which evaluators have access to complementary information on the sample of academics they are evaluating. For instance, they can view bibliometric measures on authors, e.g., the h-index, or, for previously published works submitted for evaluation, they can see the journals in which the works appeared and their impact factors, e.g., the practice adopted by ANVUR in the VQR in Italy. On the one hand, this information could give referees a more thorough and complete picture of the academic under evaluation. On the other hand, it can affect their judgment, especially if personal clues, such as gender, age and academic affiliation, enter the picture. Supporters of informed peer review believe that indicators should be used to support peer review rather than to replace it. More recently, Waltman and Van Eck (2016) have proposed contextualized scientometrics as a means to integrate informed peer review and citation indicators and to favour discipline-specific evaluation.
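As a purely illustrative aside, the h-index mentioned above is simple enough to compute directly from a list of citation counts. The following minimal sketch (the function name and the toy data are hypothetical, not part of any evaluation exercise discussed here) shows the kind of indicator a referee might be presented with:

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    at least h papers with at least h citations each."""
    ranked = sorted(citations, reverse=True)  # highest counts first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still meets the threshold
        else:
            break
    return h

# Toy example: five papers with these citation counts give an h-index of 3
print(h_index([10, 8, 5, 2, 1]))  # prints 3
```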

The concept of contextualized scientometrics is an attempt to accommodate the logics of contexts and emerging quantitative standards. It is a relatively new approach proposed to integrate expert judgment and scientometric indicators, an upgraded version of informed peer review promoting three principles: context, simplicity and diversity. First, any indicator must be justified by the context in which it is used; for instance, this includes details on the list of publications under evaluation and on the definition of fields (in case field-normalized measures are used). Second, indicators should be as simple as possible and need to be described clearly. Finally, a diverse array of indicators should be used, so that they complement each other and support peer review (Waltman and Van Eck 2016).

This debate also includes attention to a wider variety of indicators, which can address the societal impact of research (Bornmann 2013) and its impact on social media (e.g., see the growing field of “altmetrics”; Wouters and Costas (2012)). However, there is still a need for a more comprehensive qualitative understanding of how academics themselves interpret the priorities and practices of their scholarly activity (e.g., ACUMEN (2010)), as well as of their scientific careers and publication trajectories (Way et al. 2016; Cole and Zuckerman 1987).

It is worth noting that, without a mixture of qualitative and quantitative approaches to evaluation, evaluators cannot do a good job of assessing scientists’ efforts (Shapin 2009). It would be even harder to disentangle the mechanisms and processes underlying the working of the academic system at a more aggregate level, i.e., in national assessments. This is confirmed by recent research that has found [dis]agreements between peer review and bibliometrics in different contexts, pointing to the necessity of mixing and complementing different approaches.

For example, in the Italian context, the results of a recent national assessment were contested, with certain studies suggesting a good level of agreement between peer review and bibliometric analysis (e.g., Bertocchi et al. (2015)), whereas others found inconsistencies (Baccini and De Nicolao 2016) and concluded that bibliometric indicators are more appropriate for evaluating the hard sciences (Abramo, D’Angelo, and Caprasecca 2009a). Findings on the case of the UK’s REF were similarly contested (Harzing 2018), with concerns especially about the size-dependency of the indicators used and the normalization applied (e.g., Traag and Waltman (2018)). The main emphasis of these re-evaluations is on the lessons to be learned from evaluation experiences in different contexts in order to overcome some of the issues raised (Bibliomagician 2018; Sivertsen 2017, 2018).

Here, the point is that academic collaboration does not happen in a social, institutional and organizational vacuum. Different academic contexts could inspire and motivate different collaboration attitudes (Sonnenwald 2007). For instance, imagine a university department with many specialized scientists working exclusively on their own research subjects without any contact, as if they were atoms colliding with each other only in some department meetings. Now, imagine another, more interdisciplinary department with a number of groups directed to complement each other’s work on shared subject matters. In the former, if any collaboration between two scholars emerges at all, it will depend on many different variables, probably mostly on their personal propensity. In the latter, collaboration will be the general practice and, for instance, solo-authored articles will be the exception rather than the rule (Leahey 2016).

To continue with this thought experiment, a network analyst could not study these two idealized departments by assuming that in both of them collaboration triads will naturally tend to close, e.g., that a coauthor of one’s coauthors will probably become a coauthor too. Even in everyday life, we need to study the interaction context before drawing any conclusion about the phenomenon under investigation (Small 2017, 154). Depending on the space and possibilities for interaction, even the contacts of one’s strong ties might never meet to close their forbidden triads (Granovetter 1977). This is why the idea of a contextualized study of science is key to studying science and scientific activity more sociologically.
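To make the thought experiment concrete, the following minimal sketch (assuming the networkx library and entirely hypothetical coauthorship ties) contrasts the global transitivity, i.e., the share of closed triads, of the two idealized departments:

```python
import networkx as nx

# Hypothetical "atomized" department: isolated coauthor pairs, triads never close
atomized = nx.Graph([("a", "b"), ("c", "d"), ("e", "f")])

# Hypothetical interdisciplinary department: overlapping research groups
interdisciplinary = nx.Graph([
    ("a", "b"), ("b", "c"), ("a", "c"),   # one cohesive group
    ("c", "d"), ("d", "e"), ("c", "e"),   # a second group sharing member "c"
])

# Transitivity = 3 * (number of triangles) / (number of connected triples)
print(nx.transitivity(atomized))           # 0.0 -> no closed triads at all
print(nx.transitivity(interdisciplinary))  # 0.6 -> triads tend to close
```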

It must be noted that the context also includes field and disciplinary practices and the social and organizational aspects of academic institutions. In academia, professionalism and managerialism have gained momentum, while other ways of coordinating academic work seem to have lost importance. This is evident when looking at university leaders, who previously were hybrid figures, half academics and half part-time administrators, and are now executives (Smith and Adams 2008; Shepherd 2017). This has contributed to the diffusion of managerialism, with universities adopting standards, models and practices similar to those of private institutions (Musselin 2008; Lamont 2017; Kalfa, Wilkinson, and Gollan 2018). In particular, it has implied an increasing importance of competitive resource allocation, with “economic” criteria of utility often dominating the decisions of academic organisations.

However, while academia has moved towards a competitive context, work relations, communication ethics and the coordination of scholarly tasks remain traditional. The bottom line is that the level of competition in the academic system has been upgraded to a 21st-century degree, while ethics, coordination and scholarly communication are under pressure. We focus on the tension between “publish or perish” on the one hand and being part of the “scholarly community” on the other. In the following chapters, we study a variety of embeddedness scenarios to see how sociologists have reacted to this hyper-competitive academic landscape.

2.2 Sociological theories

Sociologists react to new social phenomena in different ways and with different depth and speed. Their reactions differ both from those of other fields and between sociological sub-communities. Even considering team science and group work (Wuchty, Jones, and Uzzi 2007), sociologists seem to adopt it slowly (Babchuk, Keith, and Peters 1999), and their behavior is more similar to that of the humanities, e.g., they are known as sole scientists (Leahey 2016). This would explain why scientometrics, bibliometrics and the information sciences, with their focus on science and technology studies, have outgrown science studies in sociology. Indeed, science and technology studies centers and institutes are populated more by scientometricians, bibliometricians or even physicists, statisticians and computer scientists (Zeng et al. 2017; Clauset, Larremore, and Sinatra 2017) than by sociologists (e.g., the Center for Science and Technology Studies (CWTS) and the German Center for Higher Education Research and Science Studies (DZHW)). This might be because these groups were more capable of institutionalizing and systematizing their scholarly activity around these subjects (Blume 1987).

While each chapter provides an overview of the most relevant literature, here a brief view of sociological theories is presented. Note that some of these views are relatively far from our quantitative approach, which is why we keep this overview brief. Theoretical endeavors in the social studies of science (SSS), or, with further technological advancements, the renamed science and technology studies (STS), can be divided into two main groups.

First, we have social theorists who study science from specific theoretical schools without making this their only specialisation. For example, the prevalence of structural functionalism in American sociology is reflected in the works by Merton and his students on institutional norms and rewards in science (Ritzer 2004), who examined the famous Matthew effect (Merton 1968; Frank and Cook 2010). Their studies emphasized that highly prolific scientists attract more collaborations from other scientists. These scientists access more funds, which in turn increases their recognition, which in turn expands their collaboration networks and research teams, thereby giving rise to self-reinforcing processes (Sheltzer and Smith 2014). The idea that academics tend to attach preferentially to a few star scientists (Moody 2004) has been tested in many studies using novel methodologies borrowed from network analysis and graph theory (Newman 2001a, 2001b), stimulating studies on coauthorship (e.g., in articles, books, grant proposals; Zhang et al. (2018); Hou, Kretschmer, and Liu (2007)) and on membership and attendance (e.g., in research groups, conferences, scientific events, editorial boards and in scientific committees and associations; Sciabolazza et al. (2017); Bellotti, Kronegger, and Guadalupi (2016)). The main process at work is considered to be cumulative advantage, which increases the chance that already renowned scientists gain an even higher standing.
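As a purely illustrative sketch of cumulative advantage (not the specific models used in the studies cited above), a preferential-attachment network grown with networkx shows how ties concentrate on a few early “stars”; all parameters here are arbitrary:

```python
import networkx as nx

# Barabási-Albert growth: each new "scientist" attaches to m existing ones,
# with probability proportional to how many ties they already have.
G = nx.barabasi_albert_graph(n=1000, m=2, seed=42)

degrees = sorted((d for _, d in G.degree()), reverse=True)
top_ten_share = sum(degrees[:10]) / sum(degrees)

# A handful of early entrants hold a disproportionate share of all ties,
# a network analogue of the Matthew effect described above.
print(f"Share of ties held by the 10 best-connected nodes: {top_ten_share:.2f}")
```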

Another stream of research has tried to reconstruct the fragmentation of ideas (Abbott 2000; Moody 2004; Leahey 2016), looking for a small-world structure of disconnected islands (Watts and Strogatz 1998). These studies emphasized idea spaces and divisions, either between sub-fields or among neighboring fields. These idea spaces might borrow concepts, methodologies and analytical techniques from each other. For instance, experts in quantitative methodologies are those expected to mediate between fields and scientists, owing to their membership in multiple research groups and their interdisciplinary publication trajectories. This intermediation blurs the borders between fields and creates interstitial sciences (Abbott 2001).

Research here has also tried to identify the core of leaders and the periphery of followers (Kronegger, Ferligoj, and Doreian 2011; Light and Moody 2011). Studying these collaboration networks requires applying advanced mathematical and graph-theoretic methods, such as blockmodelling (Doreian, Batagelj, and Ferligoj 2005) and community detection (Newman 2006; Mucha et al. 2010; Reda et al. 2011). Interestingly, these newer methodologies and studies are linked to classic efforts to detect schools of thought and invisible colleges (Solla Price and Beaver 1966; Crane 1972), in a way that resembles broader sociological efforts to bring back old concepts in new forms and methodologies (Lizardo et al. 2018). They mainly focus on finding cohesive academic sub-groups that are more densely tied within themselves than to other groups. The conceptual machinery has venerable roots in the Durkheimian concepts of social solidarity and cohesion (Durkheim 1893). These groups tend to be densely connected internally while preserving weak ties with others (Granovetter 1977). Note that this is the main hypothesis underlying community detection and clustering, i.e., giving lower importance to ties between communities and rewarding ties within them.
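For illustration only, this hypothesis, dense ties within groups and sparse ties between them, is exactly what modularity-based community detection rewards. A minimal sketch with networkx on a toy coauthorship graph (the node labels and the bridging tie are hypothetical):

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Toy coauthorship graph: two cohesive "schools" bridged by a single weak tie
G = nx.Graph([
    ("a", "b"), ("b", "c"), ("a", "c"),   # school 1
    ("d", "e"), ("e", "f"), ("d", "f"),   # school 2
    ("c", "d"),                           # the bridging (weak) tie
])

# Greedy modularity maximization rewards within-group ties and penalizes
# between-group ties, so it recovers the two schools.
communities = greedy_modularity_communities(G)
print([sorted(c) for c in communities])   # e.g., [['a', 'b', 'c'], ['d', 'e', 'f']]
print(modularity(G, communities))         # modularity score of this partition
```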

Finally, research has also focused on embeddedness and organizational ambiguity in academia (Granovetter 1985; Boffo and Moscati 1998). If the reward system of science (Merton 1968; Hargens 2004) is not clearly defined, standards are redundant and multiple criteria regulate rewards and sanctions, it is difficult for such a context to motivate scientists to be more productive and to pursue high-quality research.

Second, and farther from our focus in this research project, there are the efforts devoted solely to knowledge, science and other neighboring concepts. This second group can be divided into two main distinctive theoretical and methodological approaches: the Sociology of Scientific Knowledge (SSK) and Actor-Network Theory (ANT), which have sometimes been labeled constructivist views of scientific knowledge (Ritzer 2004).

Our work in the following chapters was informed by some of the theoretical views presented above. In each case we tried to confront the theoretical statements with our empirical results and draw conclusions.

2.3 Discussion

Academics today are embedded in a dual context, as if they were living a double life. On the one hand, they have to keep up with quantitative evaluation standards: they have to publish as much as possible under the publish-or-perish imperative. On the other hand, they are evaluated by their disciplinary community and their peers not merely quantitatively. Quality also matters here, and quality is in the eyes of the beholder. Teaching and mentoring students are an important part of the identity and life of an academic, and these are factors that are also visible to the more proximate peers (e.g., department colleagues). Sometimes, colleagues value an article published in a less prestigious journal more than a top publication because it has raised a major media and public debate. Sometimes, academics are praised for the voluntary scholarly activity they offer to the community (e.g., peer review, journal editorship, etc.). They might receive higher recognition from peers because they received a prestigious grant, not only due to the quantity of their previous publications but to the novelty of their project. Community values, unwritten rules and peer evaluations are embedded in academic reward systems that can be institutionally ambiguous, differently mediated by academic organisations and differently interpreted by individuals (Boffo and Moscati 1998). Academics might be rewarded for the different activities they are requested to carry out, some of which are not reflected in any quantitative evaluation, since these unwritten rules and cultural aspects of academic work are harder to record and study.

These are some of the dualities and paradoxes that academics have to face in the 21st century, and they need to manage this complex academic life. The limited resources and constraints they face might lead scientists to give up voluntary activities in favor of those that directly affect their performance in research evaluation (Bianchi et al. 2018).

Our dissertation aimed to contribute to understanding this puzzle, and we focused on different pieces of it. We reflected on the case of sociologists, both Italian and international, as this is our community. In the case of Italian sociologists, we examined their research productivity and the effect of various sources of embeddedness. Furthermore, we focused on the evolution of collaboration and coauthorship networks among Italian sociologists. Finally, we tried to understand the influence of institutional factors, with a particular focus on national research assessments, concentrating on the effect of ANVUR’s VQR 2004-2010 on sociological research in Italy.

We also focused on the issue of institutional ambiguities: the paradoxical and confusing stimuli that Italian sociologists (and other academics in Italy) receive. On the one hand, they are encouraged to take part in collective and group research work, e.g., to apply for PRIN and other national-level projects and to participate in COST and other European-level projects. On the other hand, their research work is evaluated at the individual (or department) level, merely on the basis of publication output and with little focus on teaching duties; the results of this evaluation are then aggregated at the department level and feed into university rankings. Academics in Italy are paid at a relatively equal level based on academic status, and this pay does not directly depend on their research productivity. We consider this a case of institutional ambiguity (Boffo and Moscati 1998) stemming from the centralized leadership of Italian academia (Whitley 2007), which gives ambiguous signals to academics.

Since the academic world, and the sociological community within it, has its own types of diversity and inequality, we attempted to look at gender and ivy-league affiliation effects [and biases] in the publication behavior of academics in top sociology journals, to see how these types of inequality affect academics’ success in terms of research productivity.

To fill this gap, we have prepared empirical evidence on different aspects. We refer especially to multi-level modelling that considers nested and crossed membership structures and organizational embeddedness, and to policy effect evaluation with repeated measurements. At the same time, we tried to consider network and community membership and evolution. Our challenge was to evaluate some of these different aspects in a sophisticated multi-level model (Lazega et al. 2008; Bu et al. 2018; Zhang et al. 2018; Zhang, Bu, and Ding 2016; Bellotti, Kronegger, and Guadalupi 2016), controlling for the interplay between factors that each capture a specific aspect of academic life. This design helped us evaluate different variables, including individual scientists’ characteristics, the substantive focus of research (e.g., applying text and semantic analysis), community structure and evolution, and structural and homophily effects, all together while controlling for the temporal dimension and changes; a minimal sketch of this kind of specification is given below.
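As a minimal sketch of the kind of multi-level specification we have in mind (the file, column and variable names are hypothetical, and the random-effects structure is far simpler than the crossed designs cited above), one could fit a mixed-effects model with statsmodels:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per scientist per year, nested in departments
df = pd.read_csv("sociologists_panel.csv")  # columns assumed, not real data

# A random intercept for department captures organizational embeddedness;
# year enters as a fixed effect to control for the temporal dimension.
model = smf.mixedlm(
    "publications ~ gender + seniority + coauthors + C(year)",
    data=df,
    groups=df["department"],
)
result = model.fit()
print(result.summary())
```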

Of course, our study suffers from several limitations. In each chapter, we have tried to provide an overview of previous research and to discuss our findings while presenting the study’s limitations. We have tried to adopt a step-by-step approach to overcome some of these, while certain caveats are general and will be discussed in the final chapter.

References

Wuchty, Stefan, Benjamin F Jones, and Brian Uzzi. 2007. “The Increasing Dominance of Teams in Production of Knowledge.” Science 316 (5827): 1036–9.

Araújo, Eduardo B, Nuno AM Araújo, André A Moreira, Hans J Herrmann, and José S Andrade Jr. 2017. “Gender Differences in Scientific Collaborations: Women Are More Egalitarian Than Men.” PloS One 12 (5): e0176791.

Ioannidis, John P A, and Richard Klavans. 2018. “The scientists who publish a paper every five days.” Nature 561: 167–69.

Moed, H., R. De Bruin, A. Nederhof, and R. Tijssen. 1991. “International Scientific Co-Operation and Awareness Within the European Community: Problems and Perspectives.” Scientometrics 21 (3): 291–311.

Kim, K. W. 2006. “Measuring International Research Collaboration of Peripheral Countries: Taking the Context into Consideration.” Scientometrics 66 (2): 231–40.

Zhou, Ping, and Lutz Bornmann. 2015. “An Overview of Academic Publishing and Collaboration Between China and Germany.” Scientometrics 102 (2): 1781–93.

Young, Cheryl D. 1995. “An Assessment of Articles Published by Women in 15 Top Political Science Journals.” PS: Political Science and Politics 28 (3): 525–33.

Lomperis, Ana Maria Turner. 1990. “Are Women Changing the Nature of the Academic Profession?” The Journal of Higher Education 61 (6): 643–77.

Smith, David, and Jonathan Adams. 2008. “Academics or Executives? Continuity and Change in the Roles of Pro-Vice-Chancellors 1.” Higher Education Quarterly 62 (4): 340–57.

Shepherd, Sue. 2017. “There’s a Gulf Between Academics and University Management – and It’s Growing - Last Accessed 16 September 2018.” https://www.theguardian.com/higher-education-network/2017/jul/27/theres-a-gulf-between-academics-and-university-management-and-its-growing.

Edwards, Marc A, and Siddhartha Roy. 2017. “Academic Research in the 21st Century: Maintaining Scientific Integrity in a Climate of Perverse Incentives and Hypercompetition.” Environmental Engineering Science 34 (1): 51–61.

Jonkers, K., and T. Zacharewicz. 2016. “Research Performance Based Funding Systems: A Comparative Assessment.” Publications Office of the European Union, Luxembourg, EUR 27837 EN. https://doi.org/10.2791/70120.

Leahey, Erin. 2016. “From Sole Investigator to Team Scientist: Trends in the Practice and Study of Research Collaboration.” Annual Review of Sociology 42.

Sandström, Ulf, and Peter Van den Besselaar. 2018. “Funding, Evaluation, and the Performance of National Research Systems.” Journal of Informetrics 12 (1): 365–84.

Mallapaty, Smriti. 2018. “Scientists Get More Bang for Their Buck If Given More Freedom - Last Accessed 15 September 2018.” https://www.natureindex.com/news-blog/scientists-get-more-bang-for-their-buck-if-given-more-freedom.

Dupps, William J, and J Bradley Randleman. 2012. “The Perils of the Least Publishable Unit.” Journal of Refractive Surgery 28 (9): 601–2.

Smolčić, Vesna Šupak. 2013. “Salami Publication: Definitions and Examples.” Biochemia Medica: Biochemia Medica 23 (3): 237–41.

Crane, Diana. 1972. “Invisible Colleges: Diffusion of Knowledge in Scientific Communities.” Chicago: University of Chicago Press.

Fister Jr, Iztok, Iztok Fister, and Matjaž Perc. 2016. “Toward the Discovery of Citation Cartels in Citation Networks.” Frontiers in Physics 4: 49.

Rijcke, Sarah de, Paul F Wouters, Alex D Rushforth, Thomas P Franssen, and Björn Hammarfelt. 2016. “Evaluation Practices and Effects of Indicator Use—a Literature Review.” Research Evaluation 25 (2): 161–69.

Marginson, Simon, and Marijk Van der Wende. 2007. “To Rank or to Be Ranked: The Impact of Global Rankings in Higher Education.” Journal of Studies in International Education 11 (3-4): 306–29.

Garcia, Rolando. 2018. “Gaming the System in Science - Last Accessed 16 September 2018.” https://ratioscientiae.weebly.com/ratio-scientiae-blog/gaming-the-system-in-science.

Casal, Gualberto Buela. 2014. “Pathological Publishing: A New Psychological Disorder with Legal Consequences?” The European Journal of Psychology Applied to Legal Context 6 (2): 91–97.

Fang, Ferric C, R Grant Steen, and Arturo Casadevall. 2012. “Misconduct Accounts for the Majority of Retracted Scientific Publications.” Proceedings of the National Academy of Sciences 109 (42): 17028–33.

Lamont, Michèle. 2009. How Professors Think. Harvard University Press.

Gill, R. 2009. “Breaking the Silence: The Hidden Injuries of Neo-Liberal Academia.” In R. Flood and R. Gill (Eds.), Secrecy and Silence in the Research Process: Feminist Reflections. London: Routledge.

Hanlon, Shane M. 2016. “Managing My Fear of Missing Out.” Science 353 (6306): 1458–8.

Carson, Lydia, Christoph Bartneck, and Kevin Voges. 2013. “Over-Competitiveness in Academia: A Literature Review.” Disruptive Science and Technology 1 (4): 183–90.

Aitkenhead, Decca. 2013b. “Peter Higgs: I Wouldn’t Be Productive Enough for Today’s Academic System.” https://www.theguardian.com/science/2013/dec/06/peter-higgs-boson-academic-system.

Aitkenhead, Decca. 2013a. “Peter Higgs Interview: ’I Have This Kind of Underlying Incompetence’.” https://www.theguardian.com/science/2013/dec/06/peter-higgs-interview-underlying-incompetence.

Hargens, Lowell L. 2004. “What Is Mertonian Sociology of Science?” Scientometrics 60 (1): 63–70.

Wilsdon, J, L Allen, E Belfiore, P Campbell, S Curry, S Hill, R Jones, et al. 2015. “The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management.” HEFCE.

Nederhof, Anton J. 2006. “Bibliometric Monitoring of Research Performance in the Social Sciences and the Humanities: A Review.” Scientometrics 66 (1): 81–100.

Leahey, Erin, Bruce Keith, and Jason Crockett. 2010. “Specialization and Promotion in an Academic Discipline.” Research in Social Stratification and Mobility 28 (2): 135–55.

Long, J Scott. 1992. “Measures of Sex Differences in Scientific Productivity.” Social Forces 71 (1): 159–78.

Grant, Linda, and Kathryn B Ward. 1991. “Gender and Publishing in Sociology.” Gender & Society 5 (2): 207–23.

Clauset, Aaron, Daniel B Larremore, and Roberta Sinatra. 2017. “Data-Driven Predictions in the Science of Science.” Science 355 (6324): 477–80.

Merton, Robert K. 1968. “The Matthew Effect in Science: The Reward and Communication Systems of Science Are Considered.” Science 159 (3810): 56–63.

Wang, Jian, Reinhilde Veugelers, and Paula Stephan. 2017. “Bias Against Novelty in Science: A Cautionary Tale for Users of Bibliometric Indicators.” Research Policy 46 (8): 1416–36.

Mejlgaard, Niels, Richard Woolley, Carter Bloch, Susanne Bührer, Erich Griessler, Angela Jäger, Ralf Lindner, et al. 2018. “Europe’s Plans for Responsible Science.” Science 361 (6404): 761–62.

Mejlgaard, Niels, Richard Woolley, Carter Bloch, Susanne Buehrer, Erich Griessler, Angela Jaeger, Ralf Lindner, et al. 2018. “A Key Moment for European Science Policy” 17 (September): 1–6.

DORA. 2012. “San Francisco Declaration on Research Assessment (Dora).” https://sfdora.org/read/.

Bibliomagician, The. 2018. “Research Evaluation: Things We Can Learn from the Dutch.” https://thebibliomagician.wordpress.com/2018/05/31/research-evaluation-things-we-can-learn-from-the-dutch/.

Baccini, Alberto, and Giuseppe De Nicolao. 2016. “Do They Agree? Bibliometric Evaluation Versus Informed Peer Review in the Italian Research Assessment Exercise.” Scientometrics 108 (3): 1651–71.

Ancaiani, Alessio, Alberto F Anfossi, Anna Barbara, Sergio Benedetto, Brigida Blasi, Valentina Carletti, Tindaro Cicero, et al. 2015. “Evaluating Scientific Research in Italy: The 2004–10 Research Evaluation Exercise.” Research Evaluation 24 (3): 242–55.

Bertocchi, Graziella, Alfonso Gambardella, Tullio Jappelli, Carmela A Nappi, and Franco Peracchi. 2015. “Bibliometric Evaluation Vs. Informed Peer Review: Evidence from Italy.” Research Policy 44 (2): 451–66.

Hicks, Diana, Paul Wouters, Ludo Waltman, Sarah De Rijcke, and Ismael Rafols. 2015. “The Leiden Manifesto for Research Metrics.” Nature 520 (7548): 429.

Abramo, Giovanni, and Ciriaco Andrea D’Angelo. 2011a. “Evaluating Research: From Informed Peer Review to Bibliometrics.” Scientometrics 87 (3): 499–514.

Waltman, L, and N Jan Van Eck. 2016. “The Need for Contextualized Scientometric Analysis: An Opinion Paper.” In 21st International Conference on Science and Technology Indicators (STI 2016), Book of Proceedings.

Bornmann, Lutz. 2013. “What Is Societal Impact of Research and How Can It Be Assessed? A Literature Survey.” Journal of the Association for Information Science and Technology 64 (2): 217–33.

Wouters, Paul, and Rodrigo Costas. 2012. Users, Narcissism and Control: Tracking the Impact of Scholarly Publications in the 21st Century. SURFfoundation Utrecht.

ACUMEN. 2010. “Academic Careers Understood Through Measurement and Norms (Acumen) - Last Accessed 16 September 2018.” http://research-acumen.eu.

Way, Samuel F, Allison C Morgan, Aaron Clauset, and Daniel B Larremore. 2016. “The Misleading Narrative of the Canonical Faculty Productivity Trajectory.” arXiv Preprint arXiv:1612.08228.

Cole, Jonathan R, and Harriet Zuckerman. 1987. “Marriage, Motherhood and Research Performance in Science.” Scientific American 256 (2): 119–25.

Shapin, Steven. 2009. The Scientific Life: A Moral History of a Late Modern Vocation. University of Chicago Press.

Abramo, Giovanni, Ciriaco Andrea D’Angelo, and Alessandro Caprasecca. 2009a. “Allocative Efficiency in Public Research Funding: Can Bibliometrics Help?” Research Policy 38 (1): 206–15.

Harzing, Anne-Wil. 2018. “Running the REF on a Rainy Sunday Afternoon: Can We Exchange Peer Review for Metrics?” In 23rd International Conference on Science and Technology Indicators (STI 2018), September 12-14, 2018, Leiden, the Netherlands. Centre for Science and Technology Studies (CWTS).

Traag, Vincent, and Ludo Waltman. 2018. “Systematic Analysis of Agreement Between Metrics and Peer Review in the UK REF.” arXiv Preprint arXiv:1808.03491.

Sivertsen, Gunnar. 2017. “Unique, but Still Best Practice? The Research Excellence Framework (Ref) from an International Perspective.” Palgrave Communications 3: 17078.

Sivertsen, Gunnar. 2018. “Why Has No Other European Country Adopted the Research Excellence Framework?” http://blogs.lse.ac.uk/impactofsocialsciences/2018/01/16/why-has-no-other-european-country-adopted-the-research-excellence-framework/.

Sonnenwald, Diane H. 2007. “Scientific Collaboration.” Annual Review of Information Science and Technology 41 (1): 643–81.

Small, Mario Luis. 2017. Someone to Talk to. Oxford University Press.

Granovetter, Mark S. 1977. “The Strength of Weak Ties.” In Social Networks, 347–67. Elsevier.

Musselin, Christine. 2008. “Towards a Sociology of Academic Work.” In From Governance to Identity, 47–56. Springer.

Lamont, Michèle. 2017. “Prisms of Inequality: Moral Boundaries, Exclusion, and Academic Evaluation.” In Praemium Erasmianum Essay 2017. Amsterdam: Praemium Erasmianum Foundation.

Kalfa, Senia, Adrian Wilkinson, and Paul J Gollan. 2018. “The Academic Game: Compliance and Resistance in Universities.” Work, Employment and Society 32 (2): 274–91.

Babchuk, Nicholas, Bruce Keith, and George Peters. 1999. “Collaboration in Sociology and Other Scientific Disciplines: A Comparative Trend Analysis of Scholarship in the Social, Physical, and Mathematical Sciences.” The American Sociologist 30 (3): 5–21.

Zeng, An, Zhesi Shen, Jianlin Zhou, Jinshan Wu, Ying Fan, Yougui Wang, and H Eugene Stanley. 2017. “The Science of Science: From the Perspective of Complex Systems.” Physics Reports.

Blume, Stuart S. 1987. “The Social Direction of the Public Sciences: Causes and Consequences of Co-Operation Between Scientists and Non-Scientific Groups.”

Ritzer, George. 2004. Encyclopedia of Social Theory. Sage publications.

Frank, Robert H, and Philip J Cook. 2010. The Winner-Take-All Society: Why the Few at the Top Get so Much More Than the Rest of Us. Random House.

Sheltzer, Jason M, and Joan C Smith. 2014. “Elite Male Faculty in the Life Sciences Employ Fewer Women.” Proceedings of the National Academy of Sciences 111 (28): 10107–12.

Moody, James. 2004. “The Structure of a Social Science Collaboration Network: Disciplinary Cohesion from 1963 to 1999.” American Sociological Review 69 (2): 213–38.

Newman, Mark EJ. 2001a. “Scientific Collaboration Networks. I. Network Construction and Fundamental Results.” Physical Review E 64 (1): 016131.

Newman, Mark EJ. 2001b. “The Structure of Scientific Collaboration Networks.” Proceedings of the National Academy of Sciences 98 (2): 404–9.

Zhang, Chenwei, Yi Bu, Ying Ding, and Jian Xu. 2018. “Understanding Scientific Collaboration: Homophily, Transitivity, and Preferential Attachment.” Journal of the Association for Information Science and Technology 69 (1): 72–86.

Hou, Haiyan, Hildrun Kretschmer, and Zeyuan Liu. 2007. “The Structure of Scientific Collaboration Networks in Scientometrics.” Scientometrics 75 (2): 189–202.

Sciabolazza, Valerio Leone, Raffaele Vacca, Therese Kennelly Okraku, and Christopher McCarty. 2017. “Detecting and Analyzing Research Communities in Longitudinal Scientific Networks.” PloS One 12 (8): e0182516.

Bellotti, Elisa, Luka Kronegger, and Luigi Guadalupi. 2016. “The Evolution of Research Collaboration Within and Across Disciplines in Italian Academia.” Scientometrics 109 (2): 783–811. https://doi.org/10.1007/s11192-016-2068-1.

Abbott, Andrew. 2000. “Reflections on the Future of Sociology.” Contemporary Sociology 29 (2): 296–300.

Watts, Duncan J, and Steven H Strogatz. 1998. “Collective Dynamics of ‘Small-World’ Networks.” Nature 393 (6684): 440–42.

Abbott, Andrew. 2001. Chaos of Disciplines. University of Chicago Press.

Kronegger, Luka, Anuška Ferligoj, and Patrick Doreian. 2011. “On the Dynamics of National Scientific Systems.” Quality & Quantity 45 (5): 989–1015.

Light, Ryan, and James Moody. 2011. “Dynamic Building Blocks for Science: Comment on Kronegger, Ferligoj, and Doreian.” Quality & Quantity 45 (5): 1017.

Doreian, Patrick, Vladimir Batagelj, and Anuska Ferligoj. 2005. Generalized Blockmodeling. Vol. 25. Cambridge university press.

Newman, Mark EJ. 2006. “Modularity and Community Structure in Networks.” Proceedings of the National Academy of Sciences 103 (23): 8577–82.

Mucha, Peter J, Thomas Richardson, Kevin Macon, Mason A Porter, and Jukka-Pekka Onnela. 2010. “Community Structure in Time-Dependent, Multiscale, and Multiplex Networks.” Science 328 (5980): 876–78.

Reda, Khairi, Chayant Tantipathananandh, Andrew Johnson, Jason Leigh, and Tanya Berger-Wolf. 2011. “Visualizing the Evolution of Community Structures in Dynamic Social Networks.” In Computer Graphics Forum, 30:1061–70. 3. Wiley Online Library.

Solla Price, D. J., and D. Beaver. 1966. “Collaboration in an Invisible College.” American Psychologist 21 (11): 1011.

Lizardo, Omar, Dustin S Stoltz, Marshall A Taylor, and Michael Lee Wood. 2018. “Visualizing Bring-Backs.” Socius 4: 2378023118805362.

Durkheim, Emile. 1893. “The Division of Labor in Society.”

Granovetter, Mark. 1985. “Economic Action and Social Structure: The Problem of Embeddedness.” American Journal of Sociology 91 (3): 481–510.

Boffo, Stefano, and Roberto Moscati. 1998. “Evaluation in the Italian Higher Education System: Many Tribes, Many Territories... Many Godfathers.” European Journal of Education 33 (3): 349–60.

Bianchi, Federico, Francisco Grimaldo, Giangiacomo Bravo, and Flaminio Squazzoni. 2018. “The Peer Review Game: An Agent-Based Model of Scientists Facing Resource Constraints and Institutional Pressures.” Scientometrics, July. https://doi.org/10.1007/s11192-018-2825-4.

Whitley, Richard. 2007. “Changing Governance of the Public Sciences.” In The Changing Governance of the Sciences, 3–27. Springer.

Lazega, Emmanuel, Marie-Thérèse Jourda, Lise Mounier, and Rafaël Stofer. 2008. “Catching up with Big Fish in the Big Pond? Multi-Level Network Analysis Through Linked Design.” Social Networks 30 (2): 159–76.

Bu, Yi, Ying Ding, Xingkun Liang, and Dakota S Murray. 2018. “Understanding Persistent Scientific Collaboration.” Journal of the Association for Information Science and Technology 69 (3): 438–48.

Zhang, Chenwei, Yi Bu, and Ying Ding. 2016. “Understanding Scientific Collaboration from the Perspective of Collaborators and Their Network Structures.” IConference 2016 Proceedings.