Commit 206bbef

Merge pull request #399 from NGO-Algorithm-Audit/feature/structural_edits
EN NL > updated new paper UBDT and updated some content on page
2 parents: 391ef00 + fd03b86

4 files changed (43 additions, 19 deletions)

content/english/technical-tools/BDT.md

Lines changed: 21 additions & 9 deletions
```diff
@@ -23,8 +23,8 @@ quick_navigation:
     url: "#local-only"
   - title: Supported by
     url: "#supported-by"
-  - title: Awards and acknowledgements
-    url: "#awards-acknowledgements"
+  - title: Awards and publications
+    url: "#awards-publications"
   - title: Summary
     url: "#summary"
   - title: Team
@@ -64,7 +64,11 @@ team:
   - image: /images/people/MJorgensen.jpeg
     name: Mackenzie Jorgensen PhD
     bio: |
-      Researcher Alan Turing Institute, London
+      Postdoctoral Research Fellow, Northumbria University
+  - image: /images/people/JParie.jpg
+    name: Jurriaan Parie
+    bio: |
+      Director, Algorithm Audit
 ---
 
 <!-- Promobar -->
@@ -204,7 +208,7 @@ The HBAC algorithm maximizes the difference in bias variable between clusters. T
 
 The unsupervised bias detection tool has been applied in practice to audit a Dutch public sector risk profiling algorithm. Our [team](/technical-tools/bdt/#team) documented this case in a scientific paper. The tool identified proxies for students with a non-European migration background in the risk profiling algorithm. Specifically, the most deviating cluster contains an above-average share of students following vocational education and of students living far from their parent(s)' address, characteristics which turned out to correlate significantly with a non-European migration background. Deviations in the control process could therefore also have been found if aggregation statistics on the origin of students had not been available. The results are also described in Appendix A of the report below. This report was sent to <a href="https://www.tweedekamer.nl/kamerstukken/detail?id=2024D20431&did=2024D20431" target="_blank">Dutch parliament</a> on 22-05-2024.
 
-{{< embed_pdf url="/pdf-files/technical-tools/UBDT/20250205 Auditing a Dutch Public Sector Risk Profiling Algorithm.pdf" url2="/pdf-files/algoprudence/TA_AA202402/TA_AA202402_Addendum_Preventing_prejudice.pdf" width_mobile_pdf="12" width_desktop_pdf="6" >}}
+{{< embed_pdf url="/pdf-files/technical-tools/UBDT/20260215 Auditing a Dutch Public Sector Risk Profiling Algorithm.pdf" url2="/pdf-files/algoprudence/TA_AA202402/TA_AA202402_Addendum_Preventing_prejudice.pdf" width_mobile_pdf="12" width_desktop_pdf="6" >}}
 
 {{< container_close >}}
 
@@ -260,19 +264,19 @@ In 2024, the SIDN Fund <a href="https://www.sidnfonds.nl/projecten/open-source-a
 
 {{< container_close >}}
 
-<!-- Awards and acknowledgements -->
+<!-- Awards and publications -->
 
-{{< container_open title="Awards and acknowledgements" icon="fas fa-medal" id="awards-acknowledgements" >}}
+{{< container_open title="Awards and publications" icon="fas fa-medal" id="awards-publications" >}}
 
-This tool has received awards and is acknowledged by various <a href="https://github.com/NGO-Algorithm-Audit/unsupervised-bias-detection?tab=readme-ov-file#contributing-members" target="_blank">stakeholders</a>, including civil society organisations, industry representatives and academics.
+This tool has received awards and is acknowledged by various <a href="https://github.com/NGO-Algorithm-Audit/unsupervised-bias-detection?tab=readme-ov-file#contributing-members" target="_blank">stakeholders</a>, including civil society organisations, industry representatives and academic outlets.
 
 {{< accordions_area_open id="awards-accordion" >}}
 
-{{< accordion_item_open title="Finalist Stanford’s AI Audit Challenge 2023" image="/images/partner logo-cropped/StanfordHAI.png" tag1="06-2023" >}}
+{{< accordion_item_open title="IASEAI’26 presentation" image="/images/BDT/IASEAI_logo.png" tag1="2026" >}}
 
 ##### Description
 
-Under the name Joint Fairness Assessment Method (JFAM) the unsupervised bias detection tool has been selected as a finalist in <a href="https://hai.stanford.edu/ai-audit-challenge-2023-finalists" target="_blank">Stanford’s AI Audit Competition 2023</a>.
+The [scientific paper](/#scientific-paper) on the tool was presented at the conference of the International Association for Safe and Ethical Artificial Intelligence (<a href="https://www.iaseai.org" target="_blank">IASEAI’26</a>).
 
 {{< accordion_item_close >}}
 
@@ -284,6 +288,14 @@ The unsupervised bias detection tool is part of OECD's <a href="https://oecd.ai/
 
 {{< accordion_item_close >}}
 
+{{< accordion_item_open title="Finalist Stanford’s AI Audit Challenge 2023" image="/images/partner logo-cropped/StanfordHAI.png" tag1="2023" >}}
+
+##### Description
+
+Under the name Joint Fairness Assessment Method (JFAM), the unsupervised bias detection tool was selected as a finalist in <a href="https://hai.stanford.edu/ai-audit-challenge-2023-finalists" target="_blank">Stanford’s AI Audit Competition 2023</a>.
+
+{{< accordion_item_close >}}
+
 {{< accordions_area_close >}}
 
 {{< container_close >}}
```
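The hunk context in the diff above notes that the HBAC algorithm maximizes the difference in the bias variable between clusters and that the audit reports the "most deviating cluster". As a rough, hypothetical sketch of that idea, the descent into the most biased cluster can be illustrated in plain Python. This is a simplified illustration only, not the tool's actual implementation: the real algorithm clusters on all features (e.g. with k-means or k-modes), whereas this toy version repeatedly splits a single feature at its median and keeps the half with the higher average bias score.

```python
# Toy sketch of hierarchical bias-aware clustering (HBAC-style descent):
# repeatedly split the data and follow the cluster whose mean bias metric
# (e.g. an error indicator per record) is highest. Hypothetical, simplified
# illustration; the real tool clusters on all features, not one.

def find_deviating_cluster(features, bias, depth=3, min_size=4):
    """features: 1-D list of feature values; bias: parallel list of
    per-record bias scores. Returns (indices, mean_bias) of the cluster."""
    def mean_bias(idx):
        return sum(bias[i] for i in idx) / len(idx)

    cluster = list(range(len(features)))
    for _ in range(depth):
        if len(cluster) < 2 * min_size:
            break  # further splits would make clusters too small to interpret
        ordered = sorted(cluster, key=lambda i: features[i])
        mid = len(ordered) // 2
        # keep the half with the higher average bias score
        cluster = max((ordered[:mid], ordered[mid:]), key=mean_bias)
    return cluster, mean_bias(cluster)


# Synthetic example: records with high feature values also have high bias,
# so the search descends into the high-feature half of the data.
idx, m = find_deviating_cluster(list(range(16)), [0.0] * 8 + [1.0] * 8)
```

On this synthetic data the returned cluster lies entirely in the high-bias half of the records, mirroring how the audit surfaced a deviating cluster whose characteristics (vocational education, distance to parents' address) acted as proxies.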

content/nederlands/technical-tools/BDT.md

Lines changed: 22 additions & 10 deletions
```diff
@@ -22,8 +22,8 @@ quick_navigation:
     url: "#local-only"
   - title: Ondersteund door
     url: "#supported-by"
-  - title: Prijzen en ondersteuning
-    url: "#awards-acknowledgements"
+  - title: Prijzen en publicaties
+    url: "#awards-publications"
   - title: Samenvatting
     url: "#summary"
   - title: Team
@@ -64,7 +64,11 @@ team:
   - image: /images/people/MJorgensen.jpeg
     name: Mackenzie Jorgensen PhD
     bio: |
-      Onderzoeker Alan Turing Institute, Londen
+      Postdoctorale onderzoeker, Northumbria University
+  - image: /images/people/JParie.jpg
+    name: Jurriaan Parie
+    bio: |
+      Directeur, Algorithm Audit
 ---
 
 <!-- Promobar -->
@@ -206,7 +210,7 @@ Het HBAC-algoritme maximaliseert het verschil in bias variabele tussen clusters.
 
 De unsupervised bias detectie tool is in de praktijk toegepast om een risicoprofileringsalgoritme van de Dienst Uitvoering Onderwijs (DUO) te auditen. Ons [team](/technical-tools/bdt/#team) heeft deze casus uitgewerkt in een wetenschappelijke paper. De tool identificeerde proxies voor studenten met een niet-Europese migratieachtergrond in het risicoprofileringsalgoritme, specifiek opleidingsniveau en de afstand tussen het adres van de student en dat van hun ouder(s). Afwijkingen in het controleproces hadden dus ook gevonden kunnen worden als CBS-data over de herkomst van studenten niet beschikbaar was geweest. De resultaten worden ook beschreven in Appendix A van het onderstaande rapport. Dit rapport is op 22-05-2024 naar de <a href="https://www.tweedekamer.nl/kamerstukken/detail?id=2024D20431&did=2024D20431" target="_blank">Tweede Kamer</a> gestuurd.
 
-{{< embed_pdf url="/pdf-files/technical-tools/UBDT/20250205 Auditing a Dutch Public Sector Risk Profiling Algorithm.pdf" url2="/pdf-files/algoprudence/TA_AA202402/TA_AA202402_Addendum_Vooringenomenheid_voorkomen.pdf" width_mobile_pdf="12" width_desktop_pdf="6" >}}
+{{< embed_pdf url="/pdf-files/technical-tools/UBDT/20260215 Auditing a Dutch Public Sector Risk Profiling Algorithm.pdf" url2="/pdf-files/algoprudence/TA_AA202402/TA_AA202402_Addendum_Vooringenomenheid_voorkomen.pdf" width_mobile_pdf="12" width_desktop_pdf="6" >}}
 
 {{< container_close >}}
 
@@ -262,19 +266,19 @@ In 2024 ondersteunde het SIDN Fonds <a href="https://www.sidnfonds.nl/projecten/
 
 {{< container_close >}}
 
-<!-- Prijzen en ondersteuning -->
+<!-- Prijzen en publicaties -->
 
-{{< container_open title="Prijzen en ondersteuning" icon="fas fa-medal" id="awards-acknowledgements">}}
+{{< container_open title="Prijzen en publicaties" icon="fas fa-medal" id="awards-publications">}}
 
-De tool heeft prijzen ontvangen en wordt ondersteund door verschillende <a href="https://github.com/NGO-Algorithm-Audit/unsupervised-bias-detection?tab=readme-ov-file#contributing-members" target="_blank">belanghebbenden</a>, waaronder maatschappelijke organisaties, vertegenwoordigers uit de industrie en academici.
+De tool heeft prijzen ontvangen en is erkend door verschillende <a href="https://github.com/NGO-Algorithm-Audit/unsupervised-bias-detection?tab=readme-ov-file#contributing-members" target="_blank">belanghebbenden</a>, waaronder maatschappelijke organisaties, vertegenwoordigers uit de industrie en academische publicaties.
 
 {{< accordions_area_open>}}
 
-{{< accordion_item_open title="Finalist Stanford’s AI Audit Challenge 2023" image="/images/partner logo-cropped/StanfordHAI.png" tag1="06-2023" >}}
+{{< accordion_item_open title="IASEAI’26 presentatie" image="/images/BDT/IASEAI_logo.png" tag1="2026" >}}
 
 ##### Beschrijving
 
-Onder de naam Joint Fairness Assessment Method (JFAM) is de unsupervised bias detectie tool geselecteerd als finalist voor <a href="https://hai.stanford.edu/ai-audit-challenge-2023-finalists" target="_blank">Stanford’s AI Audit Competition 2023</a>.
+De [wetenschappelijke paper](/#scientific-paper) over de tool werd gepresenteerd tijdens de conferentie van de International Association for Safe and Ethical Artificial Intelligence (<a href="https://www.iaseai.org" target="_blank">IASEAI’26</a>).
 
 {{< accordion_item_close >}}
 
@@ -286,6 +290,14 @@ De unsupervised bias detectie tool maakt deel uit van de <a href="https://oecd.a
 
 {{< accordion_item_close >}}
 
+{{< accordion_item_open title="Finalist Stanford’s AI Audit Challenge 2023" image="/images/partner logo-cropped/StanfordHAI.png" tag1="2023" >}}
+
+##### Beschrijving
+
+Onder de naam Joint Fairness Assessment Method (JFAM) is de unsupervised bias detectie tool geselecteerd als finalist voor <a href="https://hai.stanford.edu/ai-audit-challenge-2023-finalists" target="_blank">Stanford’s AI Audit Competition 2023</a>.
+
+{{< accordion_item_close >}}
+
 {{< accordions_area_close >}}
 
 {{< container_close >}}
```

static/images/BDT/IASEAI_logo.png

21.8 KB

0 commit comments
