Area of investment and support: Bridging responsible AI divides (BRAID)

BRAID aims to bridge the divides between academic, industry, policy and regulatory work on responsible AI (artificial intelligence), integrating arts, humanities and social science research within a responsible AI ecosystem.

Duration:
October 2022 to October 2028
Partners involved:
Arts and Humanities Research Council (AHRC), Ada Lovelace Institute, BBC

The scope and what we're doing

BRAID (previously ‘Enabling a responsible ecosystem’) is a six-year national research programme funded by UKRI Arts and Humanities Research Council (AHRC), led by programme directors Professor Ewa Luger and Professor Shannon Vallor at the University of Edinburgh in partnership with the Ada Lovelace Institute and BBC Research and Development.

BRAID seeks to use innovative arts and humanities research to help enrich, expand, and connect a mature and sustainable responsible AI ecosystem. ‘Responsible AI’ is the practice of developing and using AI technologies that address ethical concerns and minimise the risk of negative consequences.

Aims of the programme

  1. Learn lessons from the first wave of responsible AI work to identify the barriers and divides that stand in the way of a mature and effective responsible AI ecosystem.
  2. Bring the arts and humanities more fully and centrally into the responsible AI ecosystem, to invest responsible AI with more depth and breadth.
  3. Foster more honest and diverse public conversations about positive, humane visions for AI, not just harm reduction.
  4. Widen access to responsible AI knowledge and practices in the UK and demonstrate their value in concrete contexts, through policy collaboration, fellowships and demonstrator projects.

BRAID scoping to embed responsible AI in context

Ten projects have been funded, totalling £2.2 million. The projects started on 1 February 2024 and run for six months.

The projects will define what responsible AI is across sectors such as education, policing and the creative industries. They will produce early-stage research and recommendations to inform future work in this area.

The projects illustrate how the UK is at the forefront of defining responsible AI and exploring how it can be embedded across key sectors.

Read more about the BRAID scoping projects.

BRAID fellowships

The fellowships aim to enable and support AI research and development that delivers solutions for the responsible use of AI.

The fellowships are structured so that researchers work directly with non-academic stakeholders from industry, the third sector, and government, on current responsible AI challenges. The aim is to develop new insights, drive responsible innovation, and deliver impact in the field.

We have awarded 17 fellowships with partners including Microsoft, Diverse AI and the BBC.

Read more about the BRAID fellowships.

BRAID Responsible AI Demonstrators

The demonstrator funding opportunities launched in April 2024, with projects due to be funded for three years from January 2025.

The demonstrator projects will seek to address real-world challenges facing sectors, businesses, communities and publics in the responsible development and application of AI technologies. Demonstrators will involve an intervention designed to advance responsible AI in a specific context.

The ambition is to demonstrate the transformative power of embedding responsible, human-centred approaches and thinking at the earliest stages of the AI research and development pipeline, and across the AI lifecycle.

Why we're doing it

The research programme is dedicated to integrating arts, humanities and social science research more fully into the responsible AI ecosystem. It also aims to bridge the divides between academic, industry, policy and regulatory work on responsible AI.

This research supports the development of the AI technologies of the future: technologies that:

  • the public can trust
  • businesses will adopt
  • address current societal, economic and environmental challenges

This programme will address a key contemporary challenge in data ethics and regulation for AI. It acts on the recommendations set out in the UKRI statement of opportunities on AI, which align with the government’s National AI Strategy.

The development and roll-out of AI and related data-driven technologies should be responsible, ethical and accountable by default. This means that the regulations, standards and policies that govern them need to encourage these practices in ways that foster innovation and provide benefits to the UK and our different communities.

Read our blog on how the arts and humanities are crucial to responsible AI.

Opportunities, support and resources available

Subscribe to the AHRC AI mailing list to be notified of future AI-related opportunities and events.

See the funding call for Responsible AI Demonstrators.

BRAID opportunities will be published on the funding finder.

Last updated: 19 July 2024
