Added: September 14, 2023
Updated: January 31, 2024
This research aims to understand practitioners' and decision makers' engagement with, understanding of, and the perceived impact of NIHR Evidence outputs (Alerts and Collections), and to explore how these outputs might be altered to enhance their reach, engagement and impact.
We will explore:
Reach: awareness of the outputs (how do we improve awareness of evidence outputs?)
Engagement: relevance, accessibility and understanding of the outputs for each audience (how relevant, accessible and understandable are evidence outputs?)
Impact: perceived value of the outputs for enhancing knowledge or informing decision making (what is the value of evidence outputs for improving knowledge or decision making?)
Methods: Think Aloud interviews will be conducted with 30-40 practitioners and decision makers as they read a selection of NIHR Evidence Alerts and Collections. Data will be analysed thematically.
Added: July 6, 2023
Updated: July 6, 2023
Background
External randomised pilot trials aim to assess whether a future definitive randomised controlled trial (RCT) is feasible. Pre-specified progression criteria help guide the interpretation of pilot trial findings to decide whether, and how, a definitive trial should be conducted. We aimed to examine how researchers report and plan to assess progression criteria in external pilot trial funding applications submitted to the NIHR Research for Patient Benefit Programme.
Methods
We conducted a cross-sectional study of progression criteria inclusion in Stage 1 (outline) and corresponding Stage 2 (full) funding applications for external randomised pilot trials submitted to NIHR RfPB between July 2017 and July 2019.
Results
Of the 100 Stage 1 outline applications assessed, 95 were eligible for inclusion (of these, 52 were invited to Stage 2 full application; 43 were rejected), and 49/52 were eligible for inclusion at Stage 2 full application (of these, 35 were awarded funding; 14 were rejected). Over half of applications assessed at Stage 1 (48/95, 51%), and 73% of those assessed at Stage 2 (36/49), included progression criteria in their research plans. Progression criteria were most often reported in a stop-go format, often with additional specified factors to be considered when determining feasibility (Stage 1 33/48, 69%; Stage 2 21/36, 58%). Recruitment and retention were the most frequent indicators of feasibility used to inform progression criteria. One-third of applications provided some justification or rationale for their targets (Stage 1 16/48, 33%; Stage 2 12/36, 33%). Funding committee feedback mentioned progression criteria in over 20% of applications (Stage 1 22/95, 23%; Stage 2 11/49, 22%), either to request the addition of progression criteria or to request justification for the criteria stipulated.
Conclusions
Our findings indicate that researchers do not always include progression criteria in external randomised pilot trial applications submitted to research funders. This can result in a lack of transparency in the assessment of randomised pilot trial feasibility.
Added: July 6, 2023
Updated: July 6, 2023
Background
Research funders use a wide variety of application assessment processes, yet there is little evidence on their relative advantages and disadvantages. A broad distinction can be made between processes with a single-stage assessment of full proposals and those that first invite an outline, with full proposals invited at a second stage only from those shortlisted. This paper examines the effects of changing from a one-stage to a two-stage process within the UK National Institute for Health Research's (NIHR) Research for Patient Benefit (RfPB) Programme, which made this change in 2015.
Methods
A retrospective comparative design was used to compare eight one-stage funding competitions (912 applications) with eight two-stage funding competitions (1090 applications). Comparisons were made between the number of applications submitted, number of peer and lay reviews required, the duration of the funding round, average external peer review scores, and the total costs involved.
Results
There was a mean of 114 applications per funding round for the one-stage process and 136 for the two-stage process. The one-stage process took a mean of 274 days and the two-stage process 348 days to complete, although those who were not funded (i.e. the majority) were informed at a mean of 195 days (a mean of 79 days earlier) under the two-stage process. The mean peer review score for full applications was 6.46 under the one-stage process and 6.82 under the two-stage process (a 5.6% difference on a 1–10 scale, with 10 being the highest), but there was no significant difference between the lay reviewer scores. The one-stage process required a mean of 423 peer reviews and 102 lay reviews per funding round, and the two-stage process a mean of 208 peer reviews and 50 lay reviews (a mean difference of 215 peer reviews and 52 lay reviews). The overall cost per funding round fell from £148,908 for the one-stage process to £105,342 for the two-stage process, saving approximately £43,566 per round.
Conclusion
We conclude that a two-stage application process increases the number of applications submitted to a funding round, is less burdensome and more efficient for all those involved, is cost effective, and is associated with a small increase in peer reviewer scores. For the addition of fewer than 11 weeks to the process, substantial efficiencies are gained which benefit funders, applicants and science. Funding agencies should consider adopting a two-stage application assessment process.
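As a quick sanity check, the headline differences reported in the results above can be reproduced with simple arithmetic. This is only a sketch: every input figure is taken directly from the abstract, and the variable names are illustrative, not part of the study.

```python
# Input figures as reported in the abstract (eight rounds per process).
one_stage = {"apps": 912, "rounds": 8, "days": 274, "score": 6.46,
             "peer_reviews": 423, "lay_reviews": 102, "cost": 148_908}
two_stage = {"apps": 1090, "rounds": 8, "days": 348, "score": 6.82,
             "peer_reviews": 208, "lay_reviews": 50, "cost": 105_342}

# Mean applications per round: 114 (one-stage) vs ~136 (two-stage).
mean_apps_one = one_stage["apps"] / one_stage["rounds"]
mean_apps_two = two_stage["apps"] / two_stage["rounds"]

# Extra duration of the two-stage process: 74 days, i.e. under 11 weeks.
extra_weeks = (two_stage["days"] - one_stage["days"]) / 7

# Relative difference in mean peer review scores: ~5.6%.
score_diff_pct = (two_stage["score"] - one_stage["score"]) / one_stage["score"] * 100

# Reviews saved and cost saved per round under the two-stage process.
peer_reviews_saved = one_stage["peer_reviews"] - two_stage["peer_reviews"]  # 215
lay_reviews_saved = one_stage["lay_reviews"] - two_stage["lay_reviews"]     # 52
cost_saved = one_stage["cost"] - two_stage["cost"]                          # £43,566
```

The check confirms the abstract's internal consistency, including that the "5.6% difference" refers to the relative (not absolute) change in mean peer review score.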
Added: July 6, 2023
Updated: July 6, 2023
Background
Feasibility studies are often conducted before committing to a randomised controlled trial (RCT), yet there is little published evidence to inform how useful feasibility studies are, especially in terms of adding or reducing waste in research. This study attempted to examine how many feasibility studies demonstrated that the full trial was feasible and whether some feasibility studies were inherently likely to be feasible or not feasible, based on the topic area and/or research setting.
Methods
Keyword searches were conducted on the International Standard Randomised Controlled Trials Number (ISRCTN) registry to identify all completed feasibility studies which had been conducted in the UK.
Results
A total of 625 of the 1933 records identified were reviewed before it became evident that it would be futile to continue. Of the 329 feasibility studies identified, 160 (49%) had a known outcome; of these, 133 (83%) were deemed to be feasible and only 27 (17%) were reported as non-feasible. There were therefore too few studies to allow the intended comparison of differences in non-feasible studies by topic and/or setting.
Conclusions
There were too few studies reported as non-feasible to draw any useful conclusions on whether topic and/or setting had an effect. However, the high feasibility rate (83%) may suggest that non-feasible studies are subject to publication bias or that many feasible studies are redundant and may be adding waste to the research pathway.
Added: July 6, 2023
Updated: July 6, 2023
In the context of avoiding research waste, the conduct of a feasibility study before a clinical trial should reduce the risk that further resources will be committed to a trial that is likely to ‘fail’. However, there is little evidence indicating whether feasibility studies add to or reduce waste in research. Feasibility studies funded by the National Institute for Health Research’s (NIHR) Research for Patient Benefit (RfPB) programme were examined to determine how many had published their findings, how many had applied for further funding for a full trial, and the timeframe in which both of these occurred. A total of 120 feasibility studies which had closed by May 2016 were identified, and each Principal Investigator (PI) was sent a questionnaire, from which 89 responses were received and deemed suitable for analysis. Based on self-reported answers from the PIs, a total of 57 feasibility studies were judged feasible, 20 were judged not feasible, and for 12 it was uncertain whether a full trial was feasible. The RfPB programme had spent approximately £19.5m on the 89 feasibility studies, from which 16 further studies had been subsequently funded to a total of £16.8m. The 20 feasibility studies judged not feasible potentially saved up to approximately £20m of further research funding that would likely not have completed successfully. The average RfPB feasibility study took 31 months (range 18 to 48) to complete and cost £219,048 (range £72,031 to £326,830), and the average full trial funded from an RfPB feasibility study took 42 months (range 26 to 55) to complete and cost £1,163,996 (range £321,403 to £2,099,813).
The average combined timeframe of feasibility study and full trial was 72 months (range 56 to 91); in addition, however, an average of 10 months (range -7 to 29) elapsed between the end of the feasibility study and the application for the full trial, and a further average of 18 months (range 13 to 28) between the application for the full trial and its start. Approximately 58% of the 89 feasibility studies had published their findings, with the majority of the remaining studies still planning to publish. Because of the long timeframes involved, a number of studies were still in the process of publishing their feasibility findings and/or applying for a full trial. Feasibility studies are potentially useful for avoiding waste and de-risking funding investments in more expensive full trials; however, there is a clear time delay, and therefore some potential waste, in the existing research pathway.
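The pathway arithmetic above can be totted up explicitly, making the paper's point about delay visible: the headline 72-month figure covers only the two studies themselves, while the gaps between them add roughly 28 further months. A sketch, using only the average durations reported in the abstract (variable names are illustrative):

```python
# Average stage durations in months, as reported in the abstract.
feasibility_study = 31    # feasibility study duration (range 18 to 48)
gap_to_application = 10   # end of feasibility -> full-trial application (range -7 to 29)
gap_to_trial_start = 18   # application -> start of full trial (range 13 to 28)
full_trial = 42           # full trial duration (range 26 to 55)

# The two studies alone: ~73 months, matching the reported ~72-month
# combined average (means are drawn from partly different study subsets).
studies_only = feasibility_study + full_trial

# The gaps between stages: 28 months of the pathway spent waiting.
delay_overhead = gap_to_application + gap_to_trial_start

# The full funding-to-results pathway: roughly 101 months, well over 8 years.
pathway_total = studies_only + delay_overhead
```

Comparing `studies_only` with `pathway_total` shows that more than a quarter of the average pathway is transition time rather than research time, which is the waste the conclusion highlights.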
Added: June 20, 2023
Updated: June 20, 2023
The pericapsular nerve group (PENG) block has recently been described and shows promise in providing analgesia to the hip joint. However, the effect of this block on postoperative rehabilitation is uncertain. This study compares a preoperative PENG block with a placebo before total hip arthroplasty under spinal anesthesia.
Added: May 31, 2023
Updated: May 31, 2023
Background:
Non-inferiority (NI) trials aim to show that a new treatment is no worse than a comparator. These trials have additional complexities in design and analysis compared with the more common superiority trials, and these complexities can create confusion for researchers conducting them.
Guidance on best practice for NI trials is available; however, most of it focuses on industry-funded trials, as this is where much of the research to date has been conducted. With more treatments coming into routine use within the NHS, NI trials are becoming more common, as the benefit of the new treatment is not always in the main health outcome but instead in a secondary outcome, for example side effects. Research suggests there may be differences in the design of industry- and publicly funded NI trials, and many of the current reviews of NI trials are heavily influenced by industry-funded trials. This creates a gap in the literature in understanding how publicly funded NI trials are being designed and how the guidance translates to this different setting.
Methods: The International Standard Randomised Controlled Trial Number (ISRCTN) web registry and the National Institute for Health Research’s (NIHR) Funding and Awards Library and Journals Library were searched using the term non-inferiority and logical synonyms.
Characteristics of the design, analysis and results, as available, were recorded on a dedicated data extraction spreadsheet.
Added: May 17, 2023
Updated: May 17, 2023
Research Administration as a Profession (RAAAP) Taskforce
Research Administration as a Profession (RAAAP) is an international survey which seeks to identify the key skills, attitudes and behaviours of successful research management and administration (RMA) leaders.
The initial RAAAP survey, held in 2016, was funded by NCURA. It was led by Simon Kerridge (University of Kent, UK) and Stephanie Scott (Columbia University, USA) as Co-PIs, and supported by an international advisory group. In June 2018, the Council of the International Network of Research Management Societies (INORMS) formally endorsed the RAAAP survey as an INORMS initiative.
The Taskforce
The RAAAP Taskforce was formed in October 2018 and has evolved over the years. Initially it included many members involved in the original (2016) RAAAP exercise; it has since expanded to include representation from each of the INORMS Associations as well as some other related associations.
The aim of the RAAAP Taskforce is to continue the work of the initial RAAAP survey, by surveying Research Managers and Administrators every three years (or thereabouts), to collect and analyse longitudinal data about the profession. The Taskforce is also responsible for revising the initial survey and editing future iterations of the survey, as required.
The Surveys
The RAAAP survey now comprises two main sections. One section of the survey is a streamlined version of the 2016 survey. This section is intended to remain the same longitudinally. The other section of the survey focuses on a specific area of interest, of particular relevance at the time – the focus of this section will change with each iteration of the survey.
RAAAP-3 (2022): The third iteration of the RAAAP survey (RAAAP-3) is now live and focusses on “How I Became a Research Manager and Administrator” (HIBARMA), looking at the myriad ways we find ourselves in this profession.
RAAAP-2 (2019): The second iteration of the RAAAP survey (RAAAP-2) was launched on 1 October 2019. The ‘guest’ section of the survey focused on “Research Impact”.
RAAAP (2016): The first iteration of the RAAAP survey attracted responses from over 2,600 individuals from 64 countries. The survey’s findings were presented at RM and INORMS conferences between 2016 and 2018.
Added: April 14, 2023
Updated: April 14, 2023
Recruitment of participants to, and their retention in, randomised controlled trials (RCTs) is a key determinant of research efficiency, but is challenging (Treweek 2013). As a result, trialists and clinical trials units (CTUs) are increasingly exploring the use of digital tools to identify, recruit and retain participants.
Examples of these tools include:
• Eligibility: searches and interactive record tools to support clinicians screening participants (e.g. Koepcke et al. 2013)
• Recruitment: trial websites, social media and email campaigns to engage with the public
• Retention: Emails, websites, text messages or apps to retain patients in trials and help them meet drug, behavioural adherence or outcome assessment criteria
These tools should benefit research by reducing costs, avoiding waste, speeding the delivery of results, improving recruitment reach and reducing recruitment of ineligible patients (around 6% in Koepcke's 2013 study). However, selecting appropriate digital tools is challenging because few have been evaluated rigorously. Moreover, different success metrics are used: for example, reduced screening time, improved coverage of recruitment, or percentage of patients recruited. We need to understand which metrics are most relevant to stakeholders, to ensure wider uptake of effective tools.
We identified only one systematic review in this area, on databases to improve trial recruitment (Koepcke 2014). Its methods were not rigorous by current standards, and it located only 9 studies using reasonably robust methods. It concluded that databases could reduce the time taken to screen participants and could improve participant coverage and actual recruitment rates by between 14% and six-fold, though 4 of the 5 studies reporting this used an uncontrolled before-after design and the fifth was confounded.
Our view is that the evidence base for these tools needs to be assembled, mapped and critically appraised before synthesis, where appropriate. Only then can we confidently advise on the wider use of such tools by trialists, or on further primary research.
Added: March 8, 2023
Updated: March 8, 2023
Background: The crisis in research culture is well documented, but there is still a tendency for quantity over quality, unhealthy competitive environments, and assessment based on publications, journal prestige and funding. Research institutions need to assess their own practices to promote and advocate for change in the current research ecosystem. To build an understanding of research culture and institutions’ current practice, we conducted a review to address the questions: ‘What does the evidence say about the “problem” with “poor” research culture, what are the benefits of “good” research culture, and what does “good” look like?’
Aims: To examine the peer-reviewed and grey literature to explore the interplay between research culture, open research, career paths, recognition and rewards, and equality, diversity and inclusion, as part of a larger programme of activity at the University of Southampton.
Methods: A scoping review was undertaken. Six databases were searched, along with grey literature. Eligible literature had relevance to academic research institutions, addressed research culture, and was published between January 2017 and May 2022. Evidence was mapped and themed to specific categories. The search strategy, screening and analysis took place between April and May 2022.
Results: 1666 titles and abstracts and 924 full-text articles were assessed for eligibility. Of these, 254 articles met the eligibility criteria for inclusion. A purposive sample of relevant websites was drawn on to complement the review. Key areas for consideration were identified across the four themes of job security; wellbeing and equality of opportunity; teamwork and interdisciplinarity; and research quality and accountability.
Conclusions: There are opportunities for research institutions to improve their own practice; however, institutional solutions cannot act in isolation. Research institutions and research funders need to work together to build a more sustainable and inclusive research culture that is diverse in nature and supports individuals’ well-being, career progression and performance.