Added: April 4, 2024
Updated: April 4, 2024
As part of the implementation of the NIHR Research Inclusion strategy 2022-2027, the NIHR has developed a Disability Framework to guide the organisation in ensuring that disabled people are empowered to fully engage with the NIHR across all of its research funding activities (e.g., application, decision-making, reporting, administration and management). To inform the framework, and to avoid assumptions relating to barriers individuals are facing, we conducted a study to understand the experiences of disabled people when engaging with NIHR, and the challenges that may have hindered or prevented successful engagement. Engaging with a wide range of NIHR stakeholders through an anonymous online survey and focus groups, we identified good practice as well as barriers at NIHR, leading to recommendations for improvement to access and inclusion. These recommendations were incorporated into the NIHR Disability Framework, which was published in March 2024.
Added: March 20, 2024
Updated: March 22, 2024
Last year, the education department of Hasselt University published the first version of the UHasselt Framework on using Generative Artificial Intelligence (GenAI). Alongside an update of this initial framework, we want to expand the generic guidelines with an overview of GenAI tools used at the institution for specific research purposes.
Added: March 10, 2024
Updated: March 10, 2024
The frequency and cognizance of withdrawals and retractions (WAR) have been increasing across science. However, no work so far has evaluated the frequency and causes of WAR of Cochrane systematic reviews, which impact policy and practice globally. We conducted a retrospective meta-scientific study of Cochrane systematic reviews published during 1996-2023 that were marked as WAR, retrieved from PubMed. Data were extracted, with independent review and validation, on year of publication, country, editorial group, World Bank income classification of co-authors' countries, and listed reasons for WAR. Protocols were excluded. We found that outdated articles and authors' unavailability for updates were common reasons for WAR. This research sheds light on maintaining the reliability of evidence in healthcare.
Added: March 2, 2024
Updated: March 2, 2024
Purpose
Bar charts of numerical data, often known as dynamite plots, are unnecessary and misleading. Among numerous reasons, they distort the perceived position of the mean through the within-the-bar bias and convey no information about the distribution of the data. The machine-learning tool Barzooka can be used to rapidly screen for different graph types in journal articles.
We aim to determine the proportion of original research articles using dynamite plots to visualize data, and whether there has been a change in their use over time.
Methods
Original research articles in nine surgical fields of research were sampled based on MeSH terms and then harvested using the Python-based biblio-glutton-harvester tool. After harvesting, they were analysed using Barzooka. Over 40 000 original research articles were included in the final analysis. The results were adjusted based on previous validation data with 95% confidence bounds. Kendall τ coefficient with the Mann–Kendall test for significance was used to determine the trend of dynamite plot use over time.
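The trend test named above can be sketched in a few lines. This is a minimal illustration only: the yearly proportions below are invented, not the study's data, and the Mann–Kendall statistic is computed by hand rather than with the tooling the authors used.

```python
# Minimal sketch of the Mann–Kendall trend test on hypothetical yearly
# proportions of articles using dynamite plots (values are invented).
from itertools import combinations

# Hypothetical yearly proportions over a 10-year window
proportions = [0.62, 0.60, 0.58, 0.55, 0.56, 0.52,
               0.49, 0.47, 0.44, 0.41]

def mann_kendall_s(series):
    """Mann–Kendall S: sum of sign(x_j - x_i) over all pairs i < j.
    A strongly negative S indicates a downward trend."""
    return sum((xj > xi) - (xj < xi) for xi, xj in combinations(series, 2))

s = mann_kendall_s(proportions)
n = len(proportions)
tau = s / (n * (n - 1) / 2)  # Kendall's tau coefficient (no ties assumed)
print(s, round(tau, 3))      # prints: -43 -0.956
```

In practice one would also compute a p-value for S (e.g. via the normal approximation or `scipy.stats.kendalltau` against the year index), which is what the Mann–Kendall significance test adds on top of the tau coefficient.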
Results
Eight surgical fields of research showed a statistically significant decrease in the use of dynamite plots over 10 years. Oral and maxillofacial surgery showed no significant trend in either direction. In 2022, the use of dynamite plots ranged from roughly 30% to 70%, depending on the field and the 95% confidence bounds.
Conclusion
Our results show that the use of dynamite plots in surgical research has decreased over time; however, use remains high. More must be done to understand this phenomenon and educate surgical researchers on data visualization practices.
Added: February 8, 2024
Updated: February 8, 2024
Over the last decade, growing emphasis has been placed on research funding and performing organisations to demonstrate how their use of public and donated funds benefits the research ecosystem and wider society. Focus on greater accountability, transparency, and adding value has necessitated that research organisations develop and optimise robust methods and approaches to evaluating the outcomes and impacts of their research programmes, fellowships, and other types of funding investments. Moreover, there is emphasis on improving how research data is collected, used and reported, and on ensuring the methods used capture a broad and accurate picture of outcomes and impacts from funding. Several reviews of models and applications for research evaluations exist, but these are either limited to health research, provide only an overview of approaches, or capture only research impact assessments. A broader scoping review of methods and approaches to evidencing both outcomes and impacts from different types of research investments is now needed and should include critical analysis of the available frameworks and methods, and the types of outcomes and impacts these can measure.
Added: February 7, 2024
Updated: February 7, 2024
The past decade has seen efforts to digitalise, standardise and automate research processes, with particular emphasis on core organisational activities such as management of research data, administration, and planning. Artificial Intelligence (AI) could offer some useful insights with regards to widely encountered pressures in research, such as increasing administrative demand, growing data requirements, and tighter regulations on research activities. Technological advances, including AI, are becoming widely used in commercial and business sectors but remain to be harnessed in research, with funders and institutions only now beginning to explore the utility of AI, and investing in AI to determine how they can ethically benefit from these technological advances. With a recent increase in the use of generative AI tools, such as ChatGPT, in academic writing and grant applications, organisations in the UK are beginning to respond and calling for evidence on the benefits and drawbacks of using AI for research-focused activities. There is a need for better understanding of what AI is, what it isn't, its implications for funders, and where it may have potential benefit for funders' research management and administration (RMA) processes.
Added: January 31, 2024
Updated: January 31, 2024
Digital methods (hereafter referred to as digital) use technology (such as mobile applications) to allow participants to take part in research from their homes. However, it is well known that some groups are under-served in research, and these same groups are most at risk of being unable to take part fully in research that uses digital.
If digital is a barrier to trial participation in different groups, then research findings will not represent their needs and they may suffer poorer health outcomes.
The purpose of this project is to record the real-life experiences of members of the public who have used, or chosen not to use, digital, to understand what can be done to make it easier for those wanting to use it, and to explore solutions to improve digital for different groups wanting to take part in research.
Added: December 14, 2023
Updated: December 14, 2023
The HRB is fully committed to the DORA principles that research and researchers should be assessed on the merit of their research as a whole and not solely on journal-based metrics, and that the value and impact of all research outputs be considered. As a signatory of the DORA declaration and the Coalition for Advancing Research Assessment (COARA), the HRB supports a research environment where importance is placed on the intrinsic value and relevance of research and its potential impact on society.
Since 2016, the HRB has been using a narrative-like CV, now referred to as the HRB Career Track CV, in its research career funding schemes, placing people at the core. We are committed to the fair assessment of researchers based on the merits of their contribution as a whole.
The use of a narrative-like CV is an opportunity to reduce the outsized influence of journal-based indicators in grant reviews and to promote a more holistic assessment that recognises the societal outcomes of research in addition to the generation of knowledge.
The HRB is also assessing user experience by surveying applicants, reviewers and, where applicable, mentors across its different career schemes. To date, we have conducted two rounds of user-experience assessment.
• Round 1 surveys, 2021-2022: What were the results of the first round of the user-experience assessment?
• Round 2 surveys, 2023-2024: The second round, which is based on the Joint Funders Group shared evaluation framework, is currently underway. This second survey includes an additional module collecting basic demographic and EDI variables from those surveyed, which will enable us to glean richer information within and across different respondent groups. The findings of the second round of surveys are expected in late 2024.
Added: November 17, 2023
Updated: November 17, 2023
This study (funded by a Research England Enhancing Research Culture Fund) obtains qualitative insight into the work patterns and work-life balance of staff who undertake any form of research at the University of Southampton. Participants are invited to voluntarily take part in an online focus group and/or an online one-to-one interview and are recruited from all faculties using a stratified, non-random sample. To date [November 2023], 50 participants have taken part in one-to-one interviews and 15 people have taken part in focus groups. This study will be ongoing until August 2024.
Added: October 7, 2023
Updated: October 7, 2023
Title: UK Researcher views on Blinding in Complex Intervention Randomised Controlled Trials: A Survey of UKCRC and TMRP Researchers
Background:
Blinding, the practice of concealing treatment allocation, plays a crucial role in randomized controlled trials (RCTs). However, in the context of complex interventions, its feasibility and impact on study quality have been subjects of debate. This survey aims to capture the perceptions of researchers affiliated with the UK Clinical Research Collaboration (UKCRC) and the Trial Methodology Research Partnership (TMRP) in the United Kingdom regarding blinding in complex intervention RCTs.
Methods:
This cross-sectional study employed an online survey comprising 30 questions, administered via a structured questionnaire to UKCRC and TMRP researchers. Respondents' views were assessed on a Likert scale ranging from "Strongly Disagree" to "Strongly Agree"; in addition to these quantitative responses, a free-text box allowed for qualitative input. The survey collected data from a large sample of UKCRC and TMRP researchers and was analysed using descriptive statistics.
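The descriptive analysis of Likert responses described above amounts to tabulating counts and percentages per response level. A minimal sketch follows; the response data here are entirely invented for illustration, not from the survey.

```python
# Hypothetical sketch: descriptive summary of Likert responses to one survey
# item (the responses below are invented example data).
from collections import Counter

LIKERT = ["Strongly Disagree", "Disagree", "Neutral", "Agree", "Strongly Agree"]

responses = ["Agree", "Strongly Agree", "Neutral", "Agree",
             "Disagree", "Agree", "Strongly Disagree", "Neutral"]

counts = Counter(responses)
total = len(responses)
for level in LIKERT:
    pct = 100 * counts[level] / total
    print(f"{level:<18} {counts[level]:>2}  ({pct:.1f}%)")
```

Reporting counts per level (rather than a mean of coded values) is the usual choice for ordinal Likert data, since the distance between adjacent levels is not assumed to be equal.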
Results:
Our findings indicate a diverse spectrum of perceptions among UKCRC and TMRP researchers regarding blinding in complex intervention RCTs. While many respondents acknowledged the challenges inherent in blinding within this context, responses varied in terms of its perceived importance, influence on securing funding, and alignment with critical appraisal tools. Notably, the recent NIHR-MRC Framework (2021) emerged as a focal point in shaping attitudes toward blinding.
Conclusion:
This survey presents researchers' perceptions regarding blinding in complex intervention RCTs. The results emphasise the necessity for continued dialogue within the research community to address methodological challenges and to adapt guidelines to better suit the complex intervention context. A comprehensive understanding of these perceptions has the potential to enhance the quality and relevance of clinical research in this evolving field.