Bridging AI and Inequalities Research: A Metascience Framework for Methodological Pluralism

Author: Jason Hung, PhD

Note: This is a postdoctoral grant application submitted to the Marie Skłodowska-Curie Actions (MSCA) to seek funding. The proposal was refined from an earlier, unsuccessful application submitted to UKRI. Because the Department of Informatics at King’s College London believes this grant proposal has high potential, the Department has supported my application for MSCA funding following the UKRI rejection.

Remark: This page only shows partial details about my MSCA grant application. All details identifying myself, senior academics at the Department of Informatics at King’s, or other related individuals are omitted from this page.

Proposed Research and Innovation Objectives

In social science, a major research problem is that existing studies typically focus on single dimensions of inequality, failing to fully interpret their multidimensionality and intersectionality. Persistent inequalities lead to wasted human potential (Ghosh, 2019), social instability (Houle, 2022), and reduced economic growth (Shih, 2012). Policymakers need robust evidence across a spectrum of intersecting inequalities to design effective interventions that do not inadvertently benefit some groups whilst further marginalising others. Inequalities research draws on a variety of methodological approaches; randomised controlled trials (RCTs), qualitative methods, and mixed methods are all common (Deaton & Cartwright, 2018). Yet a further research problem is that comparison and synthesis across methods are difficult from a metascience point of view. Artificial intelligence (AI) research tools may, for example, synthesise research across different methodologies. However, existing AI research tools are not a perfect replacement for human researchers: they excel at streamlining processes and identifying connections between data produced by different methodological approaches, but they still struggle with nuanced methodological evaluation and interpretative synthesis (Agai et al., 2024). If such tools are not properly adapted for inequalities research, society risks losing the benefits AI could bring to understanding and addressing complex social disparities that undermine both individual well-being and collective societal progress. The broad research aims of this fellowship are therefore to bring the AI and inequalities research ecosystems together to critically evaluate existing AI research tools, whilst developing frameworks that can better serve diverse research methods in the inequalities field.

This fellowship will advance current understanding in two important ways. To date, there is little evidence that AI can fully address the challenges faced by inequalities researchers using traditional methodologies. In response to the broad research aims above, the first research goal of this fellowship is to close this gap by determining whether AI can overcome the limitations of traditional single-dimension approaches to inequalities research in a cost-effective fashion, potentially changing dramatically how we conceptualise and study the complex structure of inequalities. If it can, I will document how well AI performs in inequalities research; if it cannot, I will document the existing shortcomings and suggest how AI research tools need to be upgraded to address the needs of inequalities researchers in the long term. The second research goal is to establish frameworks, evaluation metrics, and benchmarks specifically designed for AI-powered inequalities research, creating new methodological knowledge that bridges AI and social science.

With this fellowship, I aim to deliver research, policy and impact outputs for positive social change. In the first year, my research will focus on generating a critical evaluation framework with a community of inequalities researchers spanning social, gender, health, and political domains. Using participatory methods, we will collaboratively evaluate whether AI-powered tools comprehensively, effectively, and efficiently identify and analyse the multidimensionality and intersectionality of inequalities research in social science. Despite the potential benefits of AI research tools (such as a drastic productivity boost when tackling complex research problems), significant scepticism and ethical concerns remain. Therefore, in Section 1.2, I aim to address the diversity of needs among inequalities researchers, for example by setting one of the research objectives as studying bias identification and diverse authorship in AI-driven inequalities research outputs, and by recruiting a gender- and ethnically balanced group of researchers and experts for data collection.

To conduct this fellowship, I will collaborate across various departments and faculties at King’s College London, and engage with different UKRI AI Investments where King’s plays a leadership role. This includes working with Responsible AI UK, an EPSRC research consortium focused on establishing world-leading practices for the design, evaluation, regulation, and operation of AI systems. King’s focus on interdisciplinary AI research and engagements with key UKRI AI Investments creates an ideal research environment for conducting this fellowship. To conclude the fellowship, I will organise a workshop to translate all research and policy engagements into actionable recommendations for policymakers and funding bodies. The fellowship will deliver clear evaluation metrics and benchmarks for AI-powered inequalities research practices.

Methodology

Work package 1 (WP1): Co-produce an evaluation framework for AI-powered inequalities research

I will adopt a participatory paradigm to co-develop an evaluation framework that is practical and beneficial for addressing the needs of the AI-powered inequalities research ecosystem. The methodological design will be grounded in co-production principles, wherein researchers, policy experts, and industry practitioners work together to collectively contribute to the generation of knowledge (UKRI, n.d.). In doing so, I aim to collaboratively build an evaluation framework that reflects the diverse perspectives, values, and interests of stakeholders involved in inequalities research.

To supplement details in Section 1.1, the more specific research aims are to co-produce an evaluation framework designed for assessing AI-powered research tools used in multidisciplinary inequalities research, and to critically evaluate these tools for their impact on multidimensional and intersectional research outputs in social science and public health domains. My methodological design aims to address multiple research objectives. The first research objective is to identify key stakeholders involved in AI and inequalities research across different sectors. The second research objective is to identify the current landscape of AI-powered research tools used in inequalities research and their perceived strengths and weaknesses. The third research objective is to identify diverse research methods used in inequalities research across different domains, and what the major opportunities or concerns are when using AI tools to facilitate inequalities research in these domains. The fourth research objective is to co-design criteria and indicators for evaluating AI-powered inequalities research in terms of the ethical implications, bias identification, research integrity risks, and diversity of authorship, as well as the equitable and inclusive representation of diverse communities. This evaluation framework is intended to be practical and adaptable for relevant academic researchers, policymakers, and industry experts.

The work will involve recruiting a diverse group of participants who focus on AI and/or inequalities research in sociological, gender, health, and political domains. Following the co-production and Equality, Diversity and Inclusion paradigms, participants will be recruited purposefully to ensure that broad and diverse perspectives and expertise are featured when developing the evaluation framework (Hung et al., 2024). I aim to purposefully recruit early-career researchers (those in their final year of doctoral research or with no more than two years of postdoctoral research experience), policy experts, and key stakeholders from relevant organisations through King’s facilities or UKRI Investments. I will aim to recruit researchers with expertise in various research methods (e.g. quantitative, qualitative, or mixed methods), and to maintain gender and ethnic balance when recruiting participants, ensuring that voices from female and ethnic minority early-career researchers are not underrepresented or silenced during the participatory workshops. One reason this project is timely and relevant is that, as indicated in the second research goal, this fellowship aims to create new methodological knowledge that bridges AI and social science. Using AI research tools for inequalities research is an emerging trend in conducting and evaluating research in the social sciences, and early-career researchers are the primary beneficiaries of this transition: at the start of their research careers, they tend to be more agile and open to adopting new research tools and methodological approaches. Unlike their senior counterparts, early-career researchers are particularly likely to actively seek new approaches and strategies to define their research trajectory while contributing to their respective fields. Therefore, I aim to recruit early-career researchers rather than their senior counterparts.

I will organise four online participatory workshops (two with researchers and experts based in London and two based in Singapore). According to the World Economic Forum, London and Singapore are two of the world’s leading smart cities in terms of AI readiness and innovation (North, 2024), and both are therefore considered leading AI hubs. This series of collaborative workshops will bring together stakeholders to co-develop the evaluation framework through activities such as brainstorming and scenario planning. The first workshop (in both London and Singapore) will focus on defining the shared goals, identifying key principles for the evaluation framework, and agreeing on the scope of the evaluation. The second workshop (in both London and Singapore) will focus on identifying evaluation criteria and indicators. Participants will brainstorm potential evaluation criteria and indicators based on both the stakeholders’ input and a background literature review, and through discussion we will refine the evaluation framework. After holding all participatory workshops, I will also design an online Google survey, which will be distributed to a wide audience of researchers and practitioners across the UK and Singapore, to gather feedback on the drafted evaluation framework and assess its usability and relevance to inequalities researchers. This collaborative methodological approach directly aligns with the aforementioned second research goal of this fellowship.

Both the qualitative and the quantitative data collected will be analysed. For qualitative data, workshop notes will be analysed using thematic analysis to identify key themes and findings related to the use of AI-powered tools in inequalities research and the development of the evaluation framework. For quantitative data, the online survey responses will be analysed using Stata 18.2 to produce descriptive statistics, in the form of tables or graphics, summarising stakeholders’ feedback on the drafted evaluation framework.
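As an illustration of the kind of descriptive summary planned for the survey data: the proposal specifies Stata 18.2 for the actual analysis, so the sketch below is only a minimal Python/pandas analogue, and the column names, respondent roles, and scores are hypothetical placeholders rather than the real survey instrument.

```python
import pandas as pd

# Hypothetical feedback on the drafted evaluation framework (Likert 1-5).
# Column names and values are illustrative, not the actual survey design.
responses = pd.DataFrame({
    "respondent_role": ["researcher", "policy_expert", "researcher",
                        "practitioner", "researcher", "policy_expert"],
    "usability_score": [4, 5, 3, 4, 5, 2],
    "relevance_score": [5, 4, 4, 3, 5, 3],
})

# Descriptive statistics per stakeholder group, mirroring the tables the
# proposal plans to produce in Stata.
summary = (responses
           .groupby("respondent_role")[["usability_score", "relevance_score"]]
           .agg(["mean", "std", "count"])
           .round(2))
print(summary)
```

A per-group breakdown like this makes it easy to see whether, for example, policy experts rate the framework’s usability differently from researchers before the framework is finalised.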

Work package 2 (WP2): Case studies of collaboratively evaluating existing AI research tools using the co-produced evaluation framework

In WP2, I will undertake a meta-research approach to collaboratively develop a series of case studies that critically evaluate existing AI research tools against the co-produced evaluation framework. Selected reviewers (again with balanced female and ethnic minority representation) and I will primarily evaluate the outputs generated by frontier AI-powered research tools, including, but not limited to, Google’s AI co-scientist and OpenAI’s Deep Research. If my application is successful, my host department will help apply for access to Google’s AI co-scientist as soon as I accept this fellowship, to avoid any delay in using the tool.

In the field of sociology, I will instruct Google’s AI co-scientist to develop four research proposals on social and gender inequalities research: (1) based on scientific topics curated using frontier large language models (LLMs) and (2) using surveys as the research method. Next, in the field of public health, I will instruct Google’s AI co-scientist to develop four research proposals on health inequalities research: (1) based on scientific topics curated using frontier LLMs and (2) using systematic reviews of RCT-focused health research as the research method. Furthermore, in the field of communication studies, I will instruct Google’s AI co-scientist to develop four research proposals on political inequalities research: (1) based on scientific topics curated using frontier LLMs and (2) using content analysis as the research method. In addition, I will ask OpenAI’s Deep Research to develop inequalities research proposals addressing the same sets of scientific topics. All these AI-powered research outputs will be evaluated against the co-produced evaluation criteria in the evaluation framework. I will purposefully invite a number of participants from the participatory workshops to serve as reviewers. Selected reviewers and I will evaluate whether the AI-powered research outputs satisfy research objectives #3 and #4 above (to identify diverse research methods used in inequalities research across different domains, and the major opportunities or concerns when using AI tools to facilitate inequalities research in these domains; and to evaluate AI-powered inequalities research in terms of ethical implications, bias identification, research integrity risks, and diversity of authorship). Where they do not, I will record all technical shortcomings.
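The collaborative review step can be sketched in code. This is a minimal illustration only, assuming hypothetical criterion names, a 1–5 rating scale, and an arbitrary acceptance threshold; the actual criteria, scales, and weights will be co-produced in the WP1 workshops.

```python
from statistics import mean

# Illustrative criteria loosely based on research objective #4; the real
# criteria will be co-produced with stakeholders in WP1.
CRITERIA = ["ethical_implications", "bias_identification",
            "research_integrity", "diversity_of_authorship"]

def score_proposal(reviewer_scores, threshold=3.0):
    """Aggregate per-criterion reviewer ratings (1-5) into mean scores and
    flag criteria falling below a hypothetical acceptance threshold."""
    aggregated = {c: mean(s[c] for s in reviewer_scores) for c in CRITERIA}
    shortcomings = [c for c, v in aggregated.items() if v < threshold]
    return aggregated, shortcomings

# Two hypothetical reviewers rating one AI-generated research proposal.
reviews = [
    {"ethical_implications": 4, "bias_identification": 2,
     "research_integrity": 4, "diversity_of_authorship": 3},
    {"ethical_implications": 5, "bias_identification": 3,
     "research_integrity": 4, "diversity_of_authorship": 2},
]
aggregated, shortcomings = score_proposal(reviews)
print(aggregated)      # per-criterion mean ratings
print(shortcomings)    # criteria to record as technical shortcomings
```

Recording the flagged criteria per proposal gives a structured way to document the technical shortcomings mentioned above, rather than relying on unstructured reviewer notes alone.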

As indicated in Section 1.4, my demonstrated familiarity with these scientific fields and corresponding research methodologies is a testament to my ability to oversee and critically evaluate the outputs generated by AI-powered research tools and check for any technical mistakes or scientific flaws made by the AI system. I am capable of manually screening and critically evaluating the entire structures and research designs presented in the outputs of AI-powered research tools. Malervy (2025) mentions that Google’s AI co-scientist does not operate fully independently; researchers must oversee every step and manually approve the research hypotheses. Therefore, given my expertise in multidisciplinary inequalities research in sociology, public health, and communications and media studies, I would be an ideal candidate to carry out this proposed research.

This meta-research approach will explicitly address the multidimensionality of inequality by analysing AI outputs across different domains, thereby providing a comprehensive and intersectional evaluation of how AI-powered tools perform across a spectrum of social issues. This methodological design directly aligns with the aforementioned first research goal of this fellowship. I will document all the steps of the meta-research approach, including which AI-powered research tools are evaluated, how the outputs are generated by these tools, and how the outputs are collaboratively and critically analysed and evaluated. Maximising the transparency of the entire meta-research approach allows the results to be reproduced by other researchers for further evaluation within the academic community.

All expected research outputs from WP1 and WP2 will be recorded in an Excel file or as recordings, workshop notes, and survey transcripts. These files will be encrypted before being transferred to cloud storage on Google Drive. I expect all research outputs from WP1 to WP3 to be published primarily as academic and policy-focused publications within one year of the conclusion of this fellowship; that is, I anticipate all research outputs will be published by the end of March 2029. Once all research outputs are published, the data can be shared publicly immediately. I plan to publish all supporting data files via the King’s Open Research Data System (KORDS), the research data repository of King’s. KORDS offers a simple, secure, self-deposit route for researchers to upload and share their data, and data shared via KORDS will be made publicly accessible. I will add a description of, and a link to, the supporting data files in the data availability statements of all research publications. KORDS also provides long-term preservation of publicly shared datasets, meaning that these datasets will remain publicly accessible for a minimum of 10 years.

This fellowship will closely adhere to the principles of open science as a core component of its methodology. My project, especially WP1 and WP2, will critically evaluate frontier “pay-for-use” AI research tools. While the use of these tools is increasingly common in academic research, the methodological design could pose ethical concerns because such tools lack transparency regarding their underlying training data, which prevents a full understanding of potential AI-inflicted biases and limitations. I will therefore adhere to open science principles to maximise verifiability and reproducibility. To mitigate these potential ethical risks, as mentioned, I plan to document all AI research procedures and outputs in detail, and the expected publications will explicitly address the transparency issues. Moreover, I will consider exploring, using, and comparing open-source alternatives to Google’s AI co-scientist and OpenAI’s Deep Research, such as Meta’s Llama models and DeepSeek-V3.1, to satisfy open science principles and to triangulate and cross-check outputs generated by different frontier AI research tools, both closed-source and open-source. My methodology is a clear implementation of a Responsible Research and Innovation (RRI) approach, through which I aim to ensure that the fellowship results in socially desirable outputs that serve the public interest. The project’s structure, as outlined in the research design plan for WP1 and WP2, follows the key principles of RRI: Anticipate, Reflect, Engage and Act.

Work package 3 (WP3): Create and disseminate actionable recommendations

In WP3, I intend to join the network of the Centre for Technology, Ethics, Law and Society (TELOS) at King’s. Here, I aim to collaborate with a network of academics, policy experts, and industry representatives to initiate research-led debates and policy engagements on how technologies and AI can be better regulated to address the legal, ethical, and social implications of their development and application. Such work contributes to evidence-based, legitimate, and effective technology policy, along with the development of a concrete framework, evaluation metrics, and benchmarks for AI-powered inequalities research practices. At TELOS, I will participate in engagement activities under the research theme “Technological Governance,” regularly engaging in policy discussions focused on data contributors’ rights and responsibilities in cases of AI malfunction, and on using synthetic data to assess fairness. Such policy-oriented work supports my fellowship goals by, for example, taking into consideration how synthetic data is used when finalising the aforementioned evaluation framework. With the opportunities and support available at King’s, I will be able to conduct technical and policy analysis to support the provision of technical mechanisms that enhance and incentivise effective AI governance. Such work can be incorporated into one of my expected research outputs—a monograph (which will be detailed later)—under the section on policy implications. Alternatively, I may also seek to contribute further to existing debates by writing policy briefs or reports for King’s or any partner institutions or UKRI Investments within King’s network.

In addition, I will join other AI-focused institutions at King’s, such as the Centre for Data Futures. Here, I hope to engage with academics and policy experts under the research theme “Data Empowerment.” I aim to contribute to research-led and policy-focused engagements to enrich debates on how communities can gain agency over the use of their data, as well as revisiting current data governance frameworks to explore how we can strengthen the data empowerment ecosystem. To further include key stakeholders’ priorities and values in research on responsible AI, and to allow the translation of research outputs into positive societal impacts, I will join the Public Participation Working Group at Responsible AI UK. Engagement in collaborative activities that involve different key stakeholders helps enhance civic capacity and participation in building a responsible AI ecosystem within and beyond scientific research. Also, taking public attitudes and interests into account helps researchers, innovators, and regulators design and deliver more inclusive policy and research frameworks. In sum, through research and discussion, I intend to contribute to building an inclusive public participation space and encouraging public dialogue on responsible AI. Overall, WP3 will go beyond the delivery of the research outputs to provide the impact and policy outputs in line with the first and second research goals of this fellowship.