
Over the last decade, research funding and research performing organisations have faced growing pressure to demonstrate how their use of public and donated funds benefits the research ecosystem and wider society. This focus on greater accountability, transparency, and added value has required research organisations to develop and optimise robust methods for evaluating the outcomes and impacts of their research programmes, fellowships, and other funding investments. There is also emphasis on improving how research data are collected, used, and reported, and on ensuring that the methods used capture a broad and accurate picture of the outcomes and impacts of funding. Several reviews of models and applications for research evaluation exist, but these are limited to health research, provide only an overview of approaches, or capture research impact assessments alone. A broader scoping review of methods and approaches to evidencing both outcomes and impacts from different types of research investments is now needed, including critical analysis of the available frameworks and methods and the types of outcomes and impacts they can measure.

The past decade has seen efforts to digitalise, standardise, and automate research processes, with particular emphasis on core organisational activities such as research data management, administration, and planning. Artificial Intelligence (AI) could offer useful insights into widely encountered pressures in research, such as increasing administrative demand, growing data requirements, and tighter regulation of research activities. Technological advances, including AI, are widely used in commercial and business sectors but remain under-exploited in research, with funders and institutions only now beginning to explore and invest in AI to determine how they can benefit ethically from these advances. With the recent increase in the use of generative AI tools, such as ChatGPT, in academic writing and grant applications, organisations in the UK are beginning to respond and are calling for evidence on the benefits and drawbacks of using AI for research-focused activities. There is a need for a better understanding of what AI is and is not, its implications for funders, and where it may benefit funders' research management and administration (RMA) processes.

Digital methods (hereafter referred to as "digital") use technology, such as mobile applications, to allow participants to take part in research from their homes. However, it is well known that some groups are under-served in research, and these same groups are most at risk of being unable to take part fully in research that uses digital. If digital is a barrier to trial participation for these groups, then research findings will not represent their needs and they may suffer poorer health outcomes.

The purpose of this project is to record the real-life experiences of members of the public who have used, or chosen not to use, digital; to understand what can be done to make it easier for those who want to use it; and to explore solutions that improve digital for the different groups wanting to take part in research.


The HRB is fully committed to the DORA principles: that research and researchers should be assessed on the merit of the research as a whole, not solely on journal-based metrics, and that the value and impact of all research outputs be considered. As a signatory of the DORA declaration and of the Coalition for Advancing Research Assessment (COARA), the HRB supports a research environment in which importance is placed on the intrinsic value and relevance of research and on its potential impact in society.
Since 2016, the HRB has used a narrative-style CV, now referred to as the HRB Career Track CV, in its research career funding schemes, placing people at the core. We are committed to the fair assessment of researchers based on the merits of their contribution as a whole.
The use of a narrative-style CV is an opportunity to reduce the strong influence of journal-based indicators in grant review and to promote a more holistic assessment that recognises the societal outcomes of research in addition to the generation of knowledge.
The HRB is also assessing user experience by surveying applicants, mentors (where applicable), and reviewers across its career schemes. To date, we have conducted two rounds of user-experience assessment.
• Round 1 surveys, 2021-2022: what were the results of the first round of the user-experience surveys?
• Round 2 surveys, 2023-2024: the second round, based on the Joint Funders Group shared evaluation framework, is currently underway. This round includes an additional module collecting basic demographic and EDI variables from respondents, which will enable richer analysis within and across respondent groups. Findings from the second round are expected in late 2024.

This study, funded by a Research England Enhancing Research Culture Fund, gathers qualitative insights into the work patterns and work-life balance of staff who undertake any form of research at the University of Southampton. Participants are invited to take part voluntarily in an online focus group and/or an online one-to-one interview, and are recruited from all faculties using a stratified, non-random sample. To date [November 2023], 50 participants have taken part in one-to-one interviews and 15 in focus groups. The study will run until August 2024.

Title: UK Researcher Views on Blinding in Complex Intervention Randomised Controlled Trials: A Survey of UKCRC and TMRP Researchers
Background:
Blinding, the practice of concealing treatment allocation, plays a crucial role in randomised controlled trials (RCTs). However, in the context of complex interventions, its feasibility and impact on study quality have been subjects of debate. This survey aims to capture the perceptions of researchers affiliated with the UK Clinical Research Collaboration (UKCRC) and the Trial Methodology Research Partnership (TMRP) in the United Kingdom regarding blinding in complex intervention RCTs.

Methods:
This cross-sectional study employed an online survey comprising 30 questions. Using a structured questionnaire, we targeted the population of UKCRC and TMRP researchers. A Likert scale, ranging from "Strongly Disagree" to "Strongly Agree", was used to assess respondents' views. In addition to quantitative responses, a free-text box allowed qualitative input. The survey collected data from a large sample of UKCRC and TMRP researchers, and responses were analysed using descriptive statistics.
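To make the descriptive analysis described above concrete, here is a minimal Python sketch of a frequency summary of five-point Likert responses. The item wording and the response data are hypothetical illustrations, not the survey's actual content.

```python
from collections import Counter

LEVELS = ["Strongly Disagree", "Disagree", "Neutral", "Agree", "Strongly Agree"]

# Hypothetical responses to a single item, e.g. "Blinding is feasible
# in complex intervention RCTs." (illustrative data only)
responses = ["Agree", "Neutral", "Strongly Disagree", "Agree",
             "Disagree", "Strongly Agree", "Agree", "Neutral"]

# Frequency table with percentages
counts = Counter(responses)
n = len(responses)
for level in LEVELS:
    k = counts.get(level, 0)
    print(f"{level:<18} {k:>2} ({k / n:.0%})")

# For ordinal data, the median rank (levels coded 1-5) is a more
# defensible summary than the mean.
ranks = sorted(LEVELS.index(r) + 1 for r in responses)
median = ranks[n // 2] if n % 2 else (ranks[n // 2 - 1] + ranks[n // 2]) / 2
print(f"median rank: {median}")
```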

Results:
Our findings indicate a diverse spectrum of perceptions among UKCRC and TMRP researchers regarding blinding in complex intervention RCTs. While many respondents acknowledged the challenges inherent in blinding within this context, responses varied in terms of its perceived importance, influence on securing funding, and alignment with critical appraisal tools. Notably, the recent NIHR-MRC Framework (2021) emerged as a focal point in shaping attitudes toward blinding.

Conclusion:
This survey presents researchers' perceptions of blinding in complex intervention RCTs. The results emphasise the necessity for continued dialogue within the research community to address methodological challenges and adapt guidelines to better suit the complex intervention context. A comprehensive understanding of these perceptions has the potential to enhance the quality and relevance of clinical research in this evolving field.

This research aims to understand practitioners' and decision makers' engagement with, understanding of, and perceived impact of NIHR Evidence outputs (Alerts and Collections), and to explore how these outputs might be altered to enhance their reach, engagement, and impact.
We will explore:
• Reach: awareness of the outputs (how do we improve awareness of evidence outputs?)
• Engagement: relevance, accessibility, and understanding of the outputs for each audience (how relevant, accessible, and understandable are evidence outputs?)
• Impact: perceived value of the outputs for enhancing knowledge or informing decision making (what is the value of evidence outputs for improving knowledge or decision making?)
Methods: Think-aloud interviews will be conducted with 30-40 practitioners and decision makers as they read a selection of NIHR Evidence Alerts and Collections. Data will be analysed thematically.

Background
External randomised pilot trials aim to assess whether a future definitive randomised controlled trial (RCT) is feasible. Pre-specified progression criteria help guide the interpretation of pilot trial findings to decide whether, and how, a definitive trial should be conducted. We aimed to examine how researchers report and plan to assess progression criteria in external pilot trial funding applications submitted to the NIHR Research for Patient Benefit Programme.

Methods
We conducted a cross-sectional study of progression criteria inclusion in Stage 1 (outline) and corresponding Stage 2 (full) funding applications for external randomised pilot trials submitted to NIHR RfPB between July 2017 and July 2019.

Results
Of the 100 Stage 1 outline applications assessed, 95 were eligible for inclusion (of these, 52 were invited to Stage 2 full application; 43 were rejected) and 49/52 were eligible for inclusion at Stage 2 full application (of these, 35 were awarded funding; 14 were rejected). Over half of applications assessed at Stage 1 (48/95, 51%), and 73% of those assessed at Stage 2 (36/49) included progression criteria in their research plans. Progression criteria were most often reported in a stop-go format, often with additional specified factors that should be considered when determining feasibility (Stage 1 33/48, 69%; Stage 2 21/36, 58%). Recruitment and retention were the most frequent indicators of feasibility to inform progression criteria. One-third of applications provided some justification or rationale for their targets (Stage 1 16/48, 33%; Stage 2 12/36, 33%). Funding committee feedback mentioned progression criteria in over 20% of applications (Stage 1 22/95, 23%; Stage 2 11/49, 22%) to either request the addition of progression criteria or provide justification for the criteria stipulated.
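Because the denominators shift between stages and subgroups, a short recomputation of the figures above (all numbers taken from the paragraph itself) may help keep them straight.

```python
# Re-deriving the Stage 1 / Stage 2 proportions reported above.
stage1_assessed, stage1_with_criteria = 95, 48
stage2_assessed, stage2_with_criteria = 49, 36

print(f"Stage 1: {stage1_with_criteria}/{stage1_assessed} = "
      f"{stage1_with_criteria / stage1_assessed:.0%}")  # 51%
print(f"Stage 2: {stage2_with_criteria}/{stage2_assessed} = "
      f"{stage2_with_criteria / stage2_assessed:.0%}")  # 73%

# Stop-go format and justification are proportions of applications that
# included progression criteria, not of all applications assessed.
print(f"Stop-go format, Stage 1: 33/{stage1_with_criteria} = {33 / 48:.0%}")  # 69%
print(f"Justification,  Stage 2: 12/{stage2_with_criteria} = {12 / 36:.0%}")  # 33%
```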

Conclusions
Our findings indicate that researchers do not always include progression criteria in external randomised pilot trial applications submitted to research funders. This can result in a lack of transparency in the assessment of randomised pilot trial feasibility.

Background
Research funders use a wide variety of application assessment processes, yet there is little evidence on their relative advantages and disadvantages. A broad distinction can be made between processes with a single-stage assessment of full proposals and those that first invite an outline, with full proposals invited at a second stage only for those shortlisted. This paper examines the effects of changing from a one-stage to a two-stage process within the UK National Institute for Health Research (NIHR) Research for Patient Benefit (RfPB) Programme, which made this change in 2015.

Methods
A retrospective comparative design was used to compare eight one-stage funding competitions (912 applications) with eight two-stage funding competitions (1090 applications). Comparisons covered the number of applications submitted, the number of peer and lay reviews required, the duration of the funding round, average external peer review scores, and total costs.

Results
There was a mean of 114 applications per funding round under the one-stage process and 136 under the two-stage process. The one-stage process took a mean of 274 days to complete and the two-stage process 348 days, although unsuccessful applicants (the majority) were informed at a mean of 195 days, on average 79 days earlier, under the two-stage process. The mean peer review score for full applications was 6.46 under the one-stage process and 6.82 under the two-stage process (a 5.6% difference on a 1-10 scale, with 10 the highest); there was no significant difference in lay reviewer scores. The one-stage process required a mean of 423 peer reviews and 102 lay reviews per funding round, the two-stage process 208 peer reviews and 50 lay reviews (mean differences of 215 and 52, respectively). The overall cost per funding round fell from £148,908 under the one-stage process to £105,342 under the two-stage process, a saving of approximately £43,566 per round.
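As a quick arithmetic check, the reported figures are internally consistent; the sketch below re-derives the headline comparisons using only the numbers in the paragraph above.

```python
# Re-deriving the headline one-stage vs two-stage comparisons.
one_stage = {"days": 274, "score": 6.46, "peer": 423, "lay": 102, "cost": 148_908}
two_stage = {"days": 348, "score": 6.82, "peer": 208, "lay": 50, "cost": 105_342}

extra_days = two_stage["days"] - one_stage["days"]
print(f"extra duration: {extra_days} days (~{extra_days / 7:.1f} weeks)")  # 74 days, ~10.6 weeks

rel_score = (two_stage["score"] - one_stage["score"]) / one_stage["score"]
print(f"peer review score difference: {rel_score:.1%}")  # ~5.6%

print(f"fewer peer reviews per round: {one_stage['peer'] - two_stage['peer']}")  # 215
print(f"fewer lay reviews per round:  {one_stage['lay'] - two_stage['lay']}")    # 52
print(f"saving per round: £{one_stage['cost'] - two_stage['cost']:,}")           # £43,566
```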

Conclusion
We conclude that a two-stage application process increases the number of applications submitted to a funding round, is less burdensome and more efficient for all involved, is cost-effective, and yields a small increase in peer review scores. For an addition of fewer than 11 weeks to the process, substantial efficiencies are gained that benefit funders, applicants, and science. Funding agencies should consider adopting a two-stage application assessment process.

Background
Feasibility studies are often conducted before committing to a randomised controlled trial (RCT), yet there is little published evidence on how useful feasibility studies are, especially in terms of adding to or reducing waste in research. This study attempted to examine how many feasibility studies demonstrated that the full trial was feasible, and whether some feasibility studies were inherently likely to be feasible or not, based on topic area and/or research setting.

Methods
Keyword searches were conducted on the International Standard Randomised Controlled Trials Number (ISRCTN) registry to identify all completed feasibility studies which had been conducted in the UK.
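As an illustration of this kind of registry screening, the sketch below filters a CSV export of ISRCTN search results by keyword, status, and country. The file name, column headings, and keyword list are assumptions for illustration only; they are not the study's actual search strategy, and the export format should be checked against the registry.

```python
# Sketch: screening a CSV export of ISRCTN search results by keyword.
# The file name, column headings, and keywords are illustrative
# assumptions, not the study's actual search strategy.
import csv

KEYWORDS = ("feasibility", "pilot")

with open("isrctn_export.csv", newline="", encoding="utf-8") as f:
    candidates = [
        row for row in csv.DictReader(f)
        if any(kw in (row.get("Title", "") + " " +
                      row.get("Scientific title", "")).lower()
               for kw in KEYWORDS)
        and row.get("Overall trial status", "").lower() == "completed"
        and "united kingdom" in row.get("Countries of recruitment", "").lower()
    ]

print(f"{len(candidates)} candidate studies for manual review")
```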

Results
A total of 625 of the 1933 records identified were reviewed before it became evident that it would be futile to continue. Of the 329 feasibility studies identified, 160 (49%) had a known outcome; of these, 133 (83%) were deemed feasible and only 27 (17%) were reported as non-feasible. There were therefore too few non-feasible studies to allow the intended comparison by topic and/or setting.
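Note that the feasibility percentages are proportions of the 160 studies with a known outcome, not of all 329 studies identified; the short recomputation below (numbers from the paragraph above) makes the denominators explicit.

```python
# Making the denominators in the reported proportions explicit.
identified, known_outcome = 329, 160
feasible, non_feasible = 133, 27
assert feasible + non_feasible == known_outcome

print(f"known outcome: {known_outcome}/{identified} = {known_outcome / identified:.0%}")      # 49%
print(f"feasible:      {feasible}/{known_outcome} = {feasible / known_outcome:.0%}")          # 83%
print(f"non-feasible:  {non_feasible}/{known_outcome} = {non_feasible / known_outcome:.0%}")  # 17%
```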

Conclusions
There were too few studies reported as non-feasible to draw any useful conclusions on whether topic and/or setting had an effect. However, the high feasibility rate (83%) may suggest that non-feasible studies are subject to publication bias or that many feasible studies are redundant and may be adding waste to the research pathway.