Commit 0d3aa2d: Merge branch 'master' into kf_edits
2 parents: 62021f4 + 03b2fb5

18 files changed

Lines changed: 423 additions & 258 deletions


assets/scss/_common.scss

Lines changed: 2 additions & 0 deletions
@@ -1210,6 +1210,8 @@ $accent: #FF4081;
     position: sticky;
     top: 80px;
     z-index: 1000;
+    overflow-y: scroll;
+    max-height: 90vh;
   }
 }
 body {
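The change above caps the sticky element at roughly the viewport height and lets its own content scroll, so a long sidebar no longer runs off-screen while pinned. A minimal standalone sketch of that pattern (the `.sidebar` selector is illustrative, not from the source file):

```scss
// Sticky-sidebar pattern, as applied in the diff above.
// Selector name is hypothetical; the values mirror the committed ones.
.sidebar {
  position: sticky;    // pin the element while its container scrolls past
  top: 80px;           // offset so it clears a fixed page header
  z-index: 1000;       // keep it above neighbouring content
  max-height: 90vh;    // never taller than ~the viewport...
  overflow-y: scroll;  // ...so overflowing content scrolls inside the box
}
```

Note that `overflow-y: scroll` always reserves a scrollbar track; `overflow-y: auto` would show one only when the content actually overflows, which is often the gentler choice.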

content/english/_index.md

Lines changed: 15 additions & 23 deletions
@@ -85,39 +85,31 @@ Activity_Feed:
 Areas_of_AI_expertise:
   title: Expertise
   enable: true
-  width_m: 4
+  width_m: 6
   width_s: 12
   feature_item:
     - name: Sociotechnical evaluation of generative AI
       icon: fas fa-robot
       content: >
-        Evaluating Large Language Models (LLMs) and other general-purpose AI
-        models for robustness, privacy and AI Act compliance. Based on
-        real-world examples, are developing a framework to analyze content
-        filters, guardrails and user interaction design choices. <a href="/technical-tools/eval-gen-ai" style="text-decoration:
-        underline;">Learn more</a> about our evaluation
-        framework.
-    - name: AI Act implementation and standards
+        We evaluate Large Language Models (LLMs) and other generative AI
+        applications relating to guardrails, privacy and AI Act compliance. Based on a mature RAG application of the Dutch judiciary, we have developed a <a href="/technical-tools/eval-gen-ai" style="text-decoration: underline;">validation framework</a> to analyze content filters, embedding strategies and user interaction design choices. <a href="/knowledge-platform/project-work/#AI-safety" style="text-decoration: underline;">Read more</a> about AI Safety project work we conduct for the AI Office of the European Commission.
+    - name: AI Act implementation and AI standards
       icon: fas fa-certificate
       content: >
         Our open-source <a href="/technical-tools/implementation-tool/"
-        style="text-decoration: underline;">AI and Algorithms Qualification Toolkit (AI AQT)</a> helps
-        organizations identifying AI systems and assigning the right risk
-        category. As a member of Dutch and European standardization
-        organisations NEN and CEN-CENELEC, Algorithm Audit monitors and
-        contributes to the development of standards for AI systems. See also our
-        public <a href="/knowledge-platform/standards/" style="text-decoration:
-        underline;">knowledge base</a> on standardization
-    - name: Bias analysis
+        style="text-decoration: underline;">AI and Algorithms Qualification Toolkit (AI AQT)</a> helps organizations identify algorithms and AI systems at scale and helps in assigning the appropriate risk category. As a member of Dutch and European standardization organisations NEN and CEN-CENELEC, Algorithm Audit monitors and contributes to the development of standards for the AI Act. See also our public <a href="/knowledge-platform/standards/" style="text-decoration: underline;">knowledge base</a> on standardization.
+    - name: Bias analysis and non-discrimination
       icon: fas fa-chart-pie
       content: >
         We evaluate algorithmic systems both from a qualitative and quantitative
-        dimension. Besides expertise about data analysis and AI engineering, we
-        possess have in-depth knowledge of legal frameworks concerning
-        non-discrimination, automated decision-making and organizational risk
-        management. See our <a href="/knowledge-platform/knowledge-base/"
-        style="text-decoration: underline;">public standards</a> how to deploy
-        algorithmic systems responsibly.
+        dimension, including analysis of objective justification as a key element of EU non-discrimination law. In addition to expertise in data analysis and statistics, Algorithm Audit has legal expertise relating to the GDPR, specifically prohibited automated decision-making, and organizational risk management. See our <a href="/knowledge-platform/knowledge-base/"
+        style="text-decoration: underline;">public standards</a> for the responsible use of algorithmic systems.
+    - name: Auditing and legal compliance
+      icon: fas fa-scroll
+      content: >
+        We audit algorithmic systems from an organisational, technical and legal
+        perspective. We also offer support with interpretation and implementation of the AI Act and GDPR legal texts, annexes and guidelines from the European Commission, including issues regarding definitions, high-risk applications and conformity assessment. Our <a href="/knowledge-platform/knowledge-base/"
+        style="text-decoration: underline;">audit reports and white papers</a> contribute to public knowledge on how legal compliance can be realised.
   button_text: Explore collaboration
   button_link: /knowledge-platform/project-work/#form
 Distinctive_in:
@@ -131,7 +123,7 @@ Distinctive_in:
       content: >
         We are pioneering the future of responsible AI by bringing together
         expertise in statistics, software development, law and ethics. Our work
-        is widely read throughout Europe and beyond.
+        is widely read throughout the Netherlands, Europe and beyond.
     - name: Not-for-profit
       icon: fas fa-seedling
       content: >
