Merge pull request #119 from XDgov/fcsm-announcement
FCSM abstracts news item
Showing 2 changed files with 17 additions and 0 deletions.
@@ -0,0 +1,17 @@
---
title: "xD to Present Four Papers at FCSM"
publish_date: 2024-07-10
permalink: /news/fcsm-abstracts-announcement/
img_alt_text: FCSM Logo
image: assets/img/news/xd-to-present-four-papers-at-fcsm.jpg
image_accessibility: FCSM Logo
---
<p>
The xD team will present four papers at the upcoming <a class="usa-link" href="https://fcsmconf.org/" target="_blank">2024 Federal Committee on Statistical Methodology Research and Policy Conference</a> in October. xD's papers focus on the responsible implementation and oversight of AI systems, including mitigating AI bias with explainable AI and causal learning, employing privacy-enhancing technologies to enable third-party audits of algorithms, and harnessing Census data to better understand the societal impact of AI systems. See below for a short description of each paper.
</p>
<ul class="usa-list">
<li><p><span class="title">“A Semi-Supervised Active Learning Approach for Block-Status Classification”</span>: We have developed an AI/ML solution that improves both the labeling and the classification of parcel data, enabling new data-driven insights while reducing the cost and effort of data assessment. We combine Explainable AI (XAI) and Causal Learning (CL) for bias identification with an active semi-supervised learning approach to ensure that model predictions are robust, fair, and trustworthy. The project aims to save approximately 800,000 hours of manual labeling and showcase the Census Bureau's expertise and evolution in applying AI/ML to geographic data.</p></li>
<li><p><span class="title">“Explainable Artificial Intelligence for Bias Identification and Mitigation in Demographic Models”</span>: In the language project with the Social, Economic, and Housing Statistics Division (SEHSD) of the Census Bureau, we highlight use cases of XAI and Causal Learning for identifying bias within demographic models and the datasets those models are built on. The use of AI in demographic applications is increasingly subject to bias scrutiny. Incorporating causal learning and XAI as a “must-have” feature of demographic AI will help alleviate problematic algorithmic bias and inject much-needed transparency into a process that would otherwise remain shrouded in shadows.</p></li>
<li><p><span class="title">“Official Statistics for Responsible AI: The Role of the Federal Statistical System in Enabling a More Accountable AI/ML ecosystem”</span>: As the country’s premier source of statistical information, the Federal Statistical System (FSS) is in a unique position to enable understanding of the social impacts of AI systems. In this policy paper, we propose three ways that FSS agencies can enhance the Responsible AI ecosystem: 1) data collection on societal impacts of AI through existing or novel survey products; 2) creation of computational tools and educational materials to encourage use of federal open data, including demographic data, for algorithm auditing and responsible AI practices; and 3) research across agencies on AI decision systems as drivers of social inequality, with a focus on employment, credit, housing, and healthcare.</p></li>
<li><p><span class="title">“Enabling Third-Party Audits of Algorithmic Systems with Privacy Enhancing Technologies”</span>: In this short position paper, we explore Privacy Enhancing Technologies (PETs) as a potential enabling intermediary for third-party auditing of algorithmic systems. We assess the utility of two approaches, Differential Privacy (DP) and Secure Multiparty Computation (SMPC), and outline potential directions for further research into the practicality of using these techniques in third-party auditing methodologies.</p></li>
</ul>
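<p>
To make the privacy-enhancing approach in the last paper concrete, below is a minimal sketch of the Laplace mechanism from differential privacy applied to a counting query, the kind of noised aggregate an auditor might receive instead of raw records. This is an illustrative example only, not code from the paper; the record structure, predicate, and epsilon value are hypothetical.
</p>

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    # Inverse-CDF sampling from a Laplace(0, scale) distribution.
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def dp_count(records, predicate, epsilon: float, rng: random.Random) -> float:
    # A counting query has sensitivity 1 (adding or removing one record
    # changes the count by at most 1), so Laplace(1/epsilon) noise gives
    # epsilon-differential privacy for the released count.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

# Hypothetical audit query: how many records did the system flag?
rng = random.Random(0)
records = [{"flagged": i % 3 == 0} for i in range(1000)]
noisy = dp_count(records, lambda r: r["flagged"], epsilon=0.5, rng=rng)
```

<p>
Under this scheme a third-party auditor learns the approximate flag rate without ever seeing an individual record, which is the intermediary role the paper explores for PETs.
</p>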