BSR recently published the first human rights assessment of the generative AI value chain. The assessment identifies and evaluates actual and potential human rights impacts, including both the risks and opportunities associated with GenAI. It maps how different value chain actors — from suppliers to foundation model developers to downstream developers, deployers, and individual users — are connected to these impacts, and recommends actions they can take to address risks and enable remedy.
This session will provide an overview of the assessment's findings and break down the human rights responsibilities of different actors across the value chain. We encourage developers, deployers, and end users of GenAI to attend.
Scheduled Speakers
Lindsey Andersen, Associate Director, Human Rights, BSR

Lindsey Andersen
Associate Director, Human Rights, BSR
San Francisco
Lindsey works at the intersection of technology and human rights, helping both tech and non-tech companies identify and address human rights impacts associated with the development and use of technology and effectively incorporate business and human rights practices. Her focus areas include content governance, end-use risks of tech products and services, and the implications of artificial intelligence (AI) and other emerging technologies.
Prior to joining BSR, Lindsey worked with the digital rights organization Access Now to drive the conversation on the human rights implications of AI, including authoring a foundational report on the topic. Lindsey previously worked at Internews, implementing a large portfolio of internet freedom projects across Latin America focused on equipping journalists and human rights defenders with digital security skills and defending the free and open internet. Lindsey has worked and lived across Latin America and is fluent in Spanish and Portuguese.
Lindsey holds a Master’s in Public and International Affairs from Princeton University and a BA in Political Science and International Studies from the University of Nebraska-Lincoln.
Recent Insights From Lindsey Andersen
- Human Rights Across the Generative AI Value Chain / February 25, 2025 / Reports
- Effective Engagement with Technology Companies / May 23, 2024 / Reports
- A Business Guide to Responsible and Sustainable AI / March 27, 2024 / Insights+
- A Human Rights Assessment of the Generative AI Value Chain / February 9, 2024 / Blog
- A Human Rights Impact Assessment of the Tech Coalition’s Lantern Program / November 7, 2023 / Blog
Hannah Darnton, Director, Technology and Human Rights, BSR

Hannah Darnton
Director, Technology and Human Rights, BSR
San Francisco
Hannah works with companies that develop and deploy technology to integrate human rights-based approaches into their policies, products, and strategies. She specializes in conducting human rights due diligence on emerging technologies, such as generative AI, affective tech, facial recognition, and surveillance, and in identifying, assessing, and mitigating risks to children in digital environments.
She has worked extensively with companies to prepare for and comply with human rights-related regulatory requirements in alignment with the OECD Guidelines and UN Guiding Principles on Business and Human Rights. This includes conducting and reviewing risk assessments under the EU Digital Services Act and the UK's Online Safety Act, as well as preparing for forthcoming obligations under the EU AI Act, CSDDD, and CSRD.
Prior to joining BSR, Hannah worked at the Skoll Foundation and spent several years in the anti-human trafficking field.
Hannah holds a Master’s in NGOs and Development with a specialization in Human Rights from the London School of Economics and a B.A. in Political Science and French from the University of Michigan.
Recent Insights From Hannah Darnton
- Harnessing AI in BSR: Emerging Use Cases / September 17, 2025 / Reports
- Human Rights Across the Generative AI Value Chain / February 25, 2025 / Reports
- Child Rights Impact Assessments in Relation to the Digital Environment / January 30, 2025 / Reports
- The Human Rights Impacts of AI / June 4, 2024 / Audio
- The EU AI Act: 11 Recommendations for Business / May 21, 2024 / Blog
J.Y. Hoh, Associate Director, Technology and Human Rights, BSR

J.Y. Hoh
Associate Director, Technology and Human Rights, BSR
Singapore
J.Y. works with technology companies to help them incorporate effective human rights practices.
Before joining BSR, J.Y. worked as an international human rights lawyer for NGOs. He was previously a Legal Officer at the Centre for Law and Democracy, an international human rights NGO promoting freedom of expression and the right to information globally. Before that, he was a Staff Lawyer at the Canadian Civil Liberties Association, Canada’s national human rights organization, where he was a recipient of the Law Foundation of Ontario’s Public Interest Articling Fellowship. He also acted as an international trade negotiator for the government of Singapore and was part of the negotiating team that closed the EU-Singapore Free Trade Agreement.
J.Y. holds a BA in Law from Somerville College, University of Oxford, and an LLM in International Law from the School of Law at the University of California, Berkeley.
Recent Insights From J.Y. Hoh
- Harnessing AI in BSR: Emerging Use Cases / September 17, 2025 / Reports
- Human Rights Across the Generative AI Value Chain / February 25, 2025 / Reports
- The EU AI Act: 11 Recommendations for Business / May 21, 2024 / Blog
- The EU AI Act: What It Means for Your Business / April 25, 2024 / Blog
- A Human Rights Assessment of the Generative AI Value Chain / February 9, 2024 / Blog
Samone Nigam, Manager, Technology Sectors, BSR
Samone Nigam
Manager, Technology Sectors, BSR
San Francisco
Samone works with BSR member companies on human rights and technology, with a focus on the impact of technologies on marginalized communities.
Prior to joining BSR, Samone conducted research for a variety of organizations, including the World Wide Web Foundation, UN Women, and the Office of the NYC Public Advocate. Her research takes a social justice focus informed by her experience in the nonprofit sector working directly with underrepresented populations, including the LGBTQI+ community, immigrants and asylum seekers, sex workers, survivors of domestic abuse and sexual assault, and people seeking safe abortion.
Samone holds a master’s degree in human rights from Columbia University’s School of International and Public Affairs and a bachelor’s degree in community studies from the University of California, Santa Cruz.
Recent Insights From Samone Nigam
- Human Rights Across the Generative AI Value Chain / February 25, 2025 / Reports
- Navigating the Rollbacks in Protection of Reproductive and LGBTQI+ Rights in the US / June 20, 2024 / Reports
- A Human Rights Assessment of the Generative AI Value Chain / February 9, 2024 / Blog
- Navigating Data Privacy in Post-Roe America / June 28, 2023 / Blog