# What we’ve heard so far / Ce que nous avons entendu jusqu’à maintenant

Feedback is only available in the language in which it was received. It has been summarized to anonymize contributors and to focus on constructive notes and suggestions.

Les commentaires sont uniquement disponibles dans la langue dans laquelle ils ont été reçus. Ils ont été résumés pour anonymiser les contributeurs et se concentrer sur des notes et des suggestions constructives.

- General feedback | Commentaires généraux
- Transparency Guidelines | Lignes directrices sur la transparence
- Ethical Principles | Principes pour une utilisation éthique

## General feedback | Commentaires généraux

### Memo from the Law Commission of Ontario

There is some overlap between the AI Principles and the Transparency Principles. While there don’t appear to be contradictions between them, it is unclear why they are kept separate.

### Safes framework

## Transparency Guidelines | Lignes directrices sur la transparence

### 1. Identify Data Enhanced Decisions | Identifier les décisions fondées sur des données

In the “why it matters” section, it may be helpful to further explain why checklists are considered for these guidelines, beyond the need for auditing; for example, in some cases these processes circumscribe the discretion of decision makers.

### 2. Keep People in Focus and in the Loop | Garder les personnes à l’esprit et les tenir au courant

Concerning the statement “Where the government makes decisions, they should be made with people/stakeholders rather than on their behalf”: the section “How to follow this guideline” clarifies that stakeholders should be consulted in the design of the decision process, not necessarily in each individual decision. Suggest rewording as “Where the government designs decisional processes ...”.

### 3. Provide Public Notice | Aviser le public et fournir des canaux de communication clairs

The how-to says that notice should be available “through familiar channels”. There’s a risk that this will be interpreted as the web, and that notice will thus only be available on that medium. Recommend strengthening this to “on all service-delivery channels in use”. People can access some services through a variety of means: telephone, in person, web, mail. Clients should be given the same consideration regardless of the channel they choose to access the service. A simple note that decisions are automated, and how to seek the details, should be available wherever a service is accessed.

### 4. Assess Expectations and Outcomes | Évaluer les attentes et les résultats

This section could include a list of principles to be assessed. As a starting point, consider:

- Transparency of the process or algorithm: how easily it can be explained.
- Fairness of the process: are all groups treated equally, and are there biases in the decisions?
- The duration and reversibility of the impacts, and the rights affected (freedom, health, economic, environmental, etc.).
- The quality, quantity and relevance of the data used to inform or train the system.
- The suitability of the procedures in place to monitor the system and its decisions, as well as the recourse mechanisms available to challenge outcomes.
- The impacts on privacy.

### 5. Allow Meaningful Access | Permettre un accès véritable

As you state, “very few people may have the skills, resources [...] to analyze [...] the technical elements of the AI, tool, algorithm”. For large complex systems, even with full transparency and disclosure, a meaningful review may never take place. Providing access is not sufficient.

The “Peer Review” mechanism from the Canadian Directive on Automated Decision-Making could be considered to supplement what is being proposed here. By requiring a third-party review of systems that present a higher level of risk, impacted users are guaranteed an evaluation that would not otherwise have taken place due to a lack of resources.

### 6. Describe Related Data | Décrire les données connexes

“As machines begin to learn how to learn” seems like an odd statement. Humans are developing machine-learning algorithms. Recommend rephrasing it as “When machines learn from data, they have the power to amplify…”.

Regarding “to make these biases seem more credible”: what is meant by this? Is it that decisions made by computers may give a false sense of impartiality?

Suggest adding a bullet such as:

- Processes to continually monitor and evaluate decisions made by the system for unintended outcomes and biases. (This supports the intro statements “Data used to train machines needs to be assessed for bias continually” and “to understand the strengths and weaknesses of the outcomes”.)

This principle suggests investigating biases within the data but provides no examples of such biases or of how they might be found. As a starting point, it could define biased data as data that does not adequately represent certain subgroups of a population, or similar.
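For illustration only, here is a minimal sketch of what such a representation check could look like. Everything in it is hypothetical: the `representation_gaps` helper, the `region` field, and the benchmark shares are invented, and a real audit would compare against actual census or service-population data.

```python
# A minimal, hypothetical sketch of a subgroup-representation check.
from collections import Counter

def representation_gaps(records, group_key, benchmarks, tolerance=0.05):
    """Flag subgroups whose observed share of the dataset falls short of
    their benchmark population share by more than `tolerance`."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected_share in benchmarks.items():
        observed_share = counts.get(group, 0) / total
        if expected_share - observed_share > tolerance:
            gaps[group] = (observed_share, expected_share)
    return gaps

# Invented training records and population benchmarks, for illustration.
records = [{"region": "urban"}] * 90 + [{"region": "rural"}] * 10
benchmarks = {"urban": 0.70, "rural": 0.30}

print(representation_gaps(records, "region", benchmarks))
# -> {'rural': (0.1, 0.3)}: rural clients are under-represented.
```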

### 7. Support Rules, Requirements and Reporting | Renforcer les règles, exigences et la reddition de comptes

The how-to requests that the alignment with ethical frameworks be documented. The release of this documentation should be encouraged, for example by rewording as “Document and make publicly available how the use of …”.

Another bullet that could be added, adapted from the Digital Nations AI Principles:

- The clear user need and public benefit

### 8. Update Regularly | Mise à jour régulière

This principle would add more value if a timeline of updates were provided, e.g. annually, each time the system receives a major update, or when its model is retrained. Alternatively, guidelines on the frequency could be given, for example that it should be commensurate with the risk/importance of the decision and the frequency at which the model is updated.