
Background: Academic sabbaticals are seen as an important aspect of academic life and require considerable resources; however, little research has been done into how they are used and whether their effects can be measured. We explored these issues at the University of Cambridge.
Methods: We used a mixed-methods approach comprising 24 interviews with academics and 8 interviews with administrators, alongside analysis of administrative and publication data from 2010 to 2019.
Results: Academics underline the importance of sabbaticals in providing uninterrupted time for research that is used to think, explore new ideas, master new techniques, develop new collaborations, draw together previous work, set work in a wider context, and provide personal discretion in research direction. They also highlight sabbaticals’ contribution in securing the beneficial effects of combining teaching and research while mitigating some of the disadvantages. However, it is difficult to detect the effect of sabbaticals on publications using a time series approach.
Conclusions: Sabbaticals make manifold contributions to academic research at the University of Cambridge; however, detecting and quantifying these contributions, and extending these findings, requires wider and more detailed investigation.

In 2015, the updated Medical Research Council (MRC) guidance emphasised the importance of conducting qualitative studies alongside RCTs and provided guidance to help researchers undertake this work. The MRC guidance, along with methodological innovations such as the QuinteT Recruitment Intervention, which uses mixed-methods approaches to develop strategies for optimising recruitment to RCTs, has highlighted the value of conducting qualitative research alongside trials.
There is now a large body of qualitative evidence which reports on the acceptability of individual interventions and/or the common barriers and facilitators to designing and delivering RCTs. These studies provide valuable, in-depth insights into the reasons why trials may or may not be ‘successful’, both in terms of an intervention’s delivery, implementation and acceptability and in terms of the design and conduct of the trial itself. However, qualitative findings are often published separately from the results of the main trial, as standalone chapters within funder reports and/or as separate journal articles. As a result, the extent to which qualitative evidence generated during a trial is used to aid the interpretation of trial findings and/or a trial’s design and delivery is uncertain.
In this study we will map and describe how qualitative research is being used alongside currently funded National Institute for Health Research (NIHR) RCTs. We will include any RCT that was published during 2021 in the Health Technology Assessment (HTA), Programme Grants for Applied Research (PGfAR) and Public Health Research (PHR) journals library, as these are the main funders of health-related RCTs in the UK. We will use the advanced search page on the NIHR Journals Library to identify relevant reports. Any RCT published in 2021 in the HTA, PGfAR and PHR journals library will be eligible for inclusion so long as it includes a qualitative study (e.g. qualitative sub-studies, qualitative process evaluations, qualitative components of mixed-methods process evaluations) that uses established methods of qualitative data collection (e.g. interviews) and analysis (e.g. thematic analysis). Two researchers will independently assess the reports in the database and undertake data extraction. Key information to be extracted may include: RCT study characteristics (aims, design, patient characteristics); qualitative study characteristics (aims, methods of data collection and analysis); participant characteristics of included qualitative studies (e.g. numbers of participants, stakeholder groups included, whether underserved populations are represented); details of how qualitative findings are reported (i.e. as separate chapters or as mixed-methods chapters); and details of how qualitative findings have informed the design, conduct, or interpretation of trial findings (e.g. how qualitative findings from feasibility trials informed the design of a future trial).
At the time of registration, preliminary searches had been conducted and the study protocol was in development.

Added: August 31, 2021

Updated: December 12, 2025

Background: Research is fundamental to improving quality of care. Although research traditionally has impact through implementation into routine care, there is increasing interest in whether participation in research itself drives better performance. The bulk of patient contacts are through general practice. If research participation improves outcomes, achieving those benefits through general practice could improve population health. However, the evidence that research activity improves outcomes mostly comes from secondary care. The same benefits may not occur in general practice. There is a need to use the wealth of data on research activity and general practice outcomes to provide a comprehensive analysis of the relationship between research activity and patient care.

Added: July 19, 2021

Updated: December 12, 2025

Single Sided Deafness (SSD) refers to the condition where there is normal or near-normal hearing in one ear and a severe to profound hearing impairment in the other ear.

Good hearing in both ears is important for everyday listening tasks such as understanding speech in noisy environments, locating where sounds are, and identifying threats such as oncoming traffic.

Researchers don’t yet agree on what benefits and harms (known as ‘outcomes’) should always be assessed when evaluating whether or not an intervention for SSD is effective. These inconsistencies hinder progress to find the most effective intervention.

Added: June 30, 2021

Updated: December 12, 2025

The Research Investments in Global Health (RESIN) study is an analysis of global investments in health research, and is based at the University of Southampton.
We analyse funding decisions and look at knowledge strengths and research gaps. For example, see the 2020 paper covering $105 billion of public and charitable research investment in global infectious disease research. https://www.thelancet.com/journals/langlo/article/PIIS2214-109X(20)30357-0/fulltext
Ongoing work includes global research funding of oncology research and an initial look at the impact of the pandemic on cancer funding. We are also bringing together 'all health research globally' to analyse the spend on each key area of health over time and to ascertain where strengths and gaps lie going forward. Our data are used by the WHO R&D Observatory, and also by funding bodies including Wellcome, the Gates Foundation and the NIHR.

Funders of health research internationally are adopting the concept of involving the public and patients in decision making within their organisation and/or within funded research initiatives (Public and Patient Involvement, PPI). To what extent, and through what mechanisms, a health research funder should do this has become a pertinent business question. This project was undertaken in 2016 to provide recommendations to the Health Research Board (HRB) about which potential PPI interventions to prioritise.
The project explored the main concepts around PPI and analysed the international PPI experience in three countries with well-established PPI supports (the UK, USA and Canada), followed by mapping of the emerging Irish PPI landscape. This information fed into the design of a survey targeting two audiences, health researchers and members of the public/patients, exploring what the HRB should do to support PPI. A total of 391 people responded to the survey, with 242 completing it.
Survey results showed that researchers had a clear preference for the HRB to provide practical support for their PPI endeavours. Awareness raising was seen as important, which is corroborated by the observation that not all respondents appear to share the same definition of PPI, a phenomenon observed internationally. Matchmaking between researchers and PPI contributors was of medium importance, as was the option for the HRB to emphasise PPI throughout award selection and management. Training needs for researchers and for PPI contributors scored medium-low. The least favoured option was to include public reviewers on selection panels for applications.
Two thirds of responding researchers claimed to have previously employed PPI within their research. This appears high and might reflect a self-selected sample and some ambiguity regarding the definition of PPI. 89% of researchers were satisfied or very satisfied with previous PPI experiences, and virtually all (147/148) intended to integrate PPI activities into their research in the future, providing a strong foundation for PPI interventions.
Main messages from the public/patient survey included a mismatch between the research carried out and the research seen as important by respondents. This has been documented internationally and presents a challenge for funders. Respondents indicated interest in activities linked to individual projects, such as proofreading patient materials and inputting into the design of the research protocol. Half of the respondents would consider working with a research funder to review the PPI aspects of applications, indicating that this would be a feasible option for the HRB. There was a high willingness to be involved in research in the future, and a mostly positive reflection on previous PPI experience, although not as positive as the researchers’ assessment.
The main recommendation arising for the HRB is to set out a 4-year plan with two parts: planned funding initiatives to implement the recommendations, and a planned gradual change in ways of working to strengthen the PPI aspect in applications and award management.

Added: June 8, 2021

Updated: December 12, 2025

To test how processes within Identification and Prioritisation can be improved.

Trials are one of the best ways of testing treatments but they can be expensive and time consuming. The amount of data collected has a big influence on both cost and time.       

We aim to understand how much time trial teams spend collecting the most important trial data (called primary outcomes) compared to the other data they collect (secondary outcomes). Outcomes are things like pain, blood pressure, or weight. Small-scale work suggests that trial teams spend most of their time on the less important outcomes. Our proposed large-scale work will find out whether this is correct. We also want to understand the time taken to collect core outcome sets (an agreed minimum amount of information) compared with trials that do not use them, to see whether they improve efficiency or worsen it.

Once we have the above, we will speak with trial teams and others involved in trials to understand what will help them plan and fund their work more efficiently, and to develop guidance trial teams can use in the future. We hope our results will make it more likely that time isn’t given to less important outcomes at the expense of the most important.

Added: March 30, 2021

Updated: December 12, 2025

Peer review is an integral part of decision-making processes to effectively allocate funding. However, concerns are consistently being raised about the bias, burden and reliability of peer review. In response, the NIHR Push the Pace-II interview project was conducted in 2016 to increase knowledge about the peer review process from the perspectives of applicants, peer reviewers, funding committee members and NIHR staff. Although three themes associated with strengths, challenges and improvements to peer review were presented, further insights could be elicited from this data by using an alternative approach to the analysis.
This study will use an inductive (data-driven) thematic analysis approach to re-analyse anonymised interview data collected as part of the Push the Pace-II work, to generate new themes and knowledge about peer review processes from the perspective of all key stakeholders who would be affected by any changes to decision-making practices (e.g. What are the motivations to peer review grant proposals? What are the expectations of reviewers when conducting peer reviews?). Understanding stakeholder expectations about the peer review process will provide essential information about the consequences of modifying peer review processes, for example impacts on reviewer recruitment and retention, or on how peer review is implemented within decision-making practice.

Objective: This study investigated the content, quality and value of feedback given to applicants who applied to one of four research programmes in the UK funded (or jointly funded) by the National Institute for Health Research (NIHR).
Design/setting: Document analysis and an online survey.
Participants and data sources: NIHR applicant feedback documents (comprising written feedback from Stage 1 and Stage 2 funding committees and from external peer reviewers) and NIHR applicants.
Methods: A mixed-methods phased approach was conducted. Phase 1 examined 114 feedback documents and developed a conceptual framework of the key components of feedback using content analysis. Phase 2 was an online survey completed by 113 NIHR applicants. Frequencies of responses to closed questions were calculated. Perceptions of quality and value of feedback were identified using content analysis of open-text responses.
Results: In phase 1, a conceptual framework was developed with seven overarching categories: ‘Study structure and quality’; ‘Team and infrastructure’; ‘Acceptability to patients and professionals’; ‘Study justification and design’; ‘Risks and contingencies’; ‘Outputs’; ‘Value for money’. A higher frequency of feedback was provided at Stage 2 and for successful applications across the majority of components. In phase 2, frequency data showed that opinion on feedback was dependent on funding outcome. Content analysis revealed four main themes: ‘Committee transparency’; ‘Content validity and reliability’; ‘Additional support’; ‘Recognition of effort and constraints’.
Conclusions: This study provides key insights into the quality, content, and value of feedback provided to NIHR applicants. The study identified key areas for improvement in NIHR funding applications, as well as in the feedback given to applicants, that are applicable to other funding organisations. These findings could be used to inform funding application guidance documents to help researchers strengthen their applications, and used more widely by other funders to inform their feedback processes.