PATH-AI Interim Report

PATH-AI Interim Results

In December 2021, the UK team published our PATH-AI Interim Report, which presented results from the first phase of the project. In this article, we summarise some of these findings and outline our plans for the final year of PATH-AI research.

In our background research on scholarship about privacy, agency, and trust, we found that these concepts have been understood rather differently in the UK and Japan. Very broadly, in the UK, the liberal tradition of the autonomous and self-possessing individual has tended to predominate, with greater emphasis on individual rights and freedom to pursue one’s personal self-interest. In Japan, these values have tended to be understood in a more relational, less absolute way, taking greater account of interdependent social relations and contextual identities.

However, our approach is informed by socio-cultural anthropology and science and technology studies, and we recognise that cultures are neither bounded nor monolithic; it is important not to suggest simplistic or essentialised binary oppositions of East vs West. There are also many correlations and convergences between the UK and Japan, driven by global flows of capital, information, labour, technologies, and neoliberal ideology, as well as by the imperative for interoperable legal and regulatory frameworks and standards to facilitate international trade. In particular, we were interested in whether the differences between UK and Japanese understandings of privacy, agency, and trust identified in previous scholarship were still reflected in actual views and practices relating to emerging data-driven technologies.

To explore these concepts in practice, between April and August 2021, the PATH-AI team in the UK conducted interviews with 26 members of the public who responded to a request for research participants kindly publicised by Camden Council and Carers UK. We also approached and interviewed 17 experts drawn from across the public sector, business, academia, and the third sector, all of whom were working on AI and/or digital healthcare technology. The PATH-AI team in Japan conducted a survey with 26 members of the general public and 23 experts, while a further 3 experts were interviewed. A similar question set (translated from English into Japanese) was used as the basis for all of the interviews and questionnaires.

We asked people about their views on and use of three emerging data-driven healthcare technologies: digital contact tracing apps, symptom checking tools, and care robots. Digital contact tracing apps included the NHS COVID-19 app for England and Wales and Japan's national COCOA app. Medical symptom checking tools included websites such as the NHS's online 111 service, as well as symptom checking services from private providers that employed machine learning techniques. Care robots were defined for the purposes of the study as physical robots that could hold simple conversations and might be used for tasks such as keeping older people company, telling jokes, or reminding users to take their medicine. We also asked questions designed to draw out participants' views on privacy, agency, and trust in relation to these and other AI and data-driven technologies.

The decision to focus on these case studies was driven in part by the COVID-19 pandemic, during which such tools gained prominence as ways to reduce the spread of infection and cope with the shifting demands placed on healthcare systems. These are technologies that seem to offer huge potential benefits, but also raise many questions about ethics and governance. Although they serve different functions, what they have in common is their digital and data-driven nature and their promise of revolutionising health and care systems by transforming the very infrastructure of healthcare.

Some of our key findings from these interviews and surveys included the following:

  • Across the interviews and surveys conducted with both UK and Japanese research participants, a thread that connected privacy, agency, and trust in the context of emerging AI and data-intensive digital technologies was the sense of growing asymmetries – of data, informed choice, resources, capabilities, and ultimately power – between users, governments, and companies. 
  • These asymmetries were fuelled by black-box digital tools, apps, and algorithmic systems, often developed and deployed in a top-down, paternalistic manner by tech companies and governments alike. This left many respondents feeling confused about what data was being collected about them and how it was being used, feeding feelings of disempowerment and distrust, with little participatory parity or agency.
  • While the recent trend towards greater anxiety about privacy seems to reflect a profound and growing societal distrust of both governments and technology companies, many experts argued that the implementation of data protection legislation has so far tended only to confuse citizens, while not preventing companies from collecting ever-greater amounts of personal data.
  • Interestingly, in Japan, respondents expressed just as much concern about individual data privacy as in the UK, suggesting that the common argument that privacy is understood in a more contextual and relational manner in Japan is less convincing, at least in the case of data-driven technologies.
  • Respondents in both the UK and Japan were concerned that citizens, government, and regulatory and legal systems were simply not able to keep up with the rapidly increasing complexity of technological developments. 
  • Differences identified between the views of respondents in the UK and Japan included contrasting concerns reflecting differing healthcare systems. In the UK, many interviewees worried about tech companies leveraging their greater capacity for data collection and analysis to privatise the NHS by the back door. In Japan, the larger concern was that the country was falling behind less scrupulous global competitors due to siloed and inefficient bureaucratic structures and an overly cautious attitude from the private sector.
  • There was also a notable difference in how AI and related technologies were viewed, with Japanese respondents somewhat more optimistic and seemingly more comfortable with the idea of, for example, the wider deployment of care robots – even though, perhaps surprisingly, they were less likely to have downloaded a digital contact tracing app, or to have used a symptom checking tool, or to have heard of or interacted with care robots.
  • However, while UK participants worried that emerging technologies might cause harm due to inadequate design and implementation, many Japanese participants seemed more worried that they would work too well or become too powerful, introducing dangers of a loss of control or the creation of a future society ruled by automated decisions and actions that would be less easily contested.
  • Across both Japan and the UK, experts and members of the public called for greater public education and far clearer communication about these increasingly complex technologies. They also called for more public consultations and meaningful participation in governance and regulation. Several experts argued for the need to embrace a collective and more horizontal approach that struck a new balance between individual rights and public benefits.

We are currently working on a longer Working Paper that explores these findings in greater depth. In the next phase of the project, we aim to build on these results in workshops that will expand the scope of our intercultural examination of AI ethics and governance beyond the UK and Japan. In these workshops, we will engage with digital rights organisations from around the world, focusing on how global AI ethics and governance frameworks address or fail to address interculturally distinct concerns. 

You can read the full Interim Report here.