<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://ricardo.chavarriaga.me/feed.xml" rel="self" type="application/atom+xml" /><link href="https://ricardo.chavarriaga.me/" rel="alternate" type="text/html" /><updated>2026-04-16T13:06:49+00:00</updated><id>https://ricardo.chavarriaga.me/feed.xml</id><title type="html">Ricardo Chavarriaga</title><subtitle>Ricardo Chavarriaga</subtitle><author><name>Ricardo Chavarriaga</name></author><entry><title type="html">A survey of artificial intelligence risk assessment methodologies</title><link href="https://ricardo.chavarriaga.me/report/A-survey-of-artificial-intelligence-risk/" rel="alternate" type="text/html" title="A survey of artificial intelligence risk assessment methodologies" /><published>2023-11-22T00:00:00+00:00</published><updated>2023-11-22T00:00:00+00:00</updated><id>https://ricardo.chavarriaga.me/report/A-survey-of-artificial-intelligence-risk</id><content type="html" xml:base="https://ricardo.chavarriaga.me/report/A-survey-of-artificial-intelligence-risk/"><![CDATA[<p>This report was commissioned to (1) inform policy makers and regulatory stakeholders about noteworthy approaches to AI risk assessment, including leading practices, and (2) inform rulemaking on AI risk assessment. The survey covers legal and regulatory approaches, current work at international bodies, work by standards development organizations, industry approaches, and prominent approaches proposed in civil-society and academic literature.</p>

<p><a href="https://www.trilateralresearch.com/wp-content/uploads/2022/01/A-survey-of-AI-Risk-Assessment-Methodologies-full-report.pdf">Link</a></p>

<p>Source: <a href="https://www.trilateralresearch.com/">Ernst &amp; Young LLP - Trilateral Research</a></p>

<table>
  <tbody>
    <tr>
      <td>Ethical Principles: Multiple Ethical Dimensions</td>
    </tr>
  </tbody>
</table>

<table>
  <tbody>
    <tr>
      <td>SDGs: N/A</td>
    </tr>
  </tbody>
</table>]]></content><author><name>Ricardo Chavarriaga</name></author><category term="Report" /><category term="Artificial Intelligence" /><category term="Other Technologies" /><category term="AI governance" /><category term="Ethics and Impact Assessment" /><category term="Policy Making" /><category term="General" /><summary type="html"><![CDATA[This report was commissioned to (1) inform policy makers and regulatory stakehol (...)]]></summary></entry><entry><title type="html">AI Assessment catalog</title><link href="https://ricardo.chavarriaga.me/assessment%20tool/AI-Asessment-catalog/" rel="alternate" type="text/html" title="AI Assessment catalog" /><published>2023-11-22T00:00:00+00:00</published><updated>2023-11-22T00:00:00+00:00</updated><id>https://ricardo.chavarriaga.me/assessment%20tool/AI-Asessment-catalog</id><content type="html" xml:base="https://ricardo.chavarriaga.me/assessment%20tool/AI-Asessment-catalog/"><![CDATA[<p>The AI assessment catalog of Fraunhofer IAIS offers a structured guideline that can be used to concretize abstract quality standards into application-specific assessment criteria.</p>

<p><a href="https://www.iais.fraunhofer.de/en/research/artificial-intelligence/ai-assessment-catalog.html">Link</a></p>

<p>Source: <a href="https://www.iais.fraunhofer.de">Fraunhofer Institute</a></p>

<table>
  <tbody>
    <tr>
      <td>Ethical Principles: Multiple Ethical Dimensions</td>
    </tr>
  </tbody>
</table>

<table>
  <tbody>
    <tr>
      <td>SDGs: N/A</td>
    </tr>
  </tbody>
</table>]]></content><author><name>Ricardo Chavarriaga</name></author><category term="Assessment Tool" /><category term="Artificial Intelligence" /><category term="Other Technologies" /><category term="AI governance" /><category term="Ethics and Impact Assessment" /><category term="Policy Making" /><category term="General" /><summary type="html"><![CDATA[The AI assessment catalog of Fraunhofer IAIS offers a structured guideline that (...)]]></summary></entry><entry><title type="html">AI Ethics Reading List</title><link href="https://ricardo.chavarriaga.me/suggested%20readings/AI-Ethics-Reading-List/" rel="alternate" type="text/html" title="AI Ethics Reading List" /><published>2023-11-22T00:00:00+00:00</published><updated>2023-11-22T00:00:00+00:00</updated><id>https://ricardo.chavarriaga.me/suggested%20readings/AI-Ethics-Reading-List</id><content type="html" xml:base="https://ricardo.chavarriaga.me/suggested%20readings/AI-Ethics-Reading-List/"><![CDATA[<p>This is a compilation of books, papers, and resources that AI ethicists recommend to help you manage your AI initiatives responsibly or, more generally, to get to know AI ethics better. Thanks to all who have helped compile the list. Please consider this a living, ever-evolving list as new AI ethics works come forward.</p>

<p><a href="https://www.aitruth.org/aiethics-readinglist/">Link</a></p>

<p>Source: <a href="https://www.aitruth.org/about">AI Truth</a></p>

<table>
  <tbody>
    <tr>
      <td>Ethical Principles: Multiple Ethical Dimensions</td>
    </tr>
  </tbody>
</table>

<table>
  <tbody>
    <tr>
      <td>SDGs: 17. Partnership for the Goals</td>
    </tr>
  </tbody>
</table>]]></content><author><name>Ricardo Chavarriaga</name></author><category term="Suggested Readings" /><category term="Artificial Intelligence" /><category term="Other Technologies" /><category term="Ethics and Impact Assessment" /><category term="AI governance" /><category term="General" /><category term="Policy Making" /><summary type="html"><![CDATA[This is a compilation of books, papers, and resources that AI Ethicists recommen (...)]]></summary></entry><entry><title type="html">AI Incident Database</title><link href="https://ricardo.chavarriaga.me/catalog/AI-Incident-Database/" rel="alternate" type="text/html" title="AI Incident Database" /><published>2023-11-22T00:00:00+00:00</published><updated>2023-11-22T00:00:00+00:00</updated><id>https://ricardo.chavarriaga.me/catalog/AI-Incident-Database</id><content type="html" xml:base="https://ricardo.chavarriaga.me/catalog/AI-Incident-Database/"><![CDATA[<p>The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.</p>

<p><a href="https://incidentdatabase.ai">Link</a></p>

<p>Source: <a href="https://incidentdatabase.ai">The Responsible AI Collaborative</a></p>

<table>
  <tbody>
    <tr>
      <td>Ethical Principles: Transparency</td>
      <td>Accountability</td>
    </tr>
  </tbody>
</table>

<table>
  <tbody>
    <tr>
      <td>SDGs: 17. Partnership for the Goals</td>
    </tr>
  </tbody>
</table>]]></content><author><name>Ricardo Chavarriaga</name></author><category term="Catalog" /><category term="Artificial Intelligence" /><category term="Other Technologies" /><category term="Ethics and Impact Assessment" /><category term="Technology Landscape" /><category term="General" /><category term="Policy Making" /><summary type="html"><![CDATA[The AI Incident Database is dedicated to indexing the collective history of harm (...)]]></summary></entry><entry><title type="html">AI Standards Hub</title><link href="https://ricardo.chavarriaga.me/standard/AI-Standards-Hub/" rel="alternate" type="text/html" title="AI Standards Hub" /><published>2023-11-22T00:00:00+00:00</published><updated>2023-11-22T00:00:00+00:00</updated><id>https://ricardo.chavarriaga.me/standard/AI-Standards-Hub</id><content type="html" xml:base="https://ricardo.chavarriaga.me/standard/AI-Standards-Hub/"><![CDATA[<p>The new home of the AI standards community. Dedicated to knowledge sharing, capacity building, and world-leading research, the Hub aims to build a vibrant and diverse community around AI standards.</p>

<p><a href="https://aistandardshub.org">Link</a></p>

<p>Source: <a href="https://www.turing.ac.uk/">Alan Turing Institute</a></p>

<table>
  <tbody>
    <tr>
      <td>Ethical Principles: Multiple Ethical Dimensions</td>
    </tr>
  </tbody>
</table>

<table>
  <tbody>
    <tr>
      <td>SDGs: N/A</td>
    </tr>
  </tbody>
</table>]]></content><author><name>Ricardo Chavarriaga</name></author><category term="Standard" /><category term="Artificial Intelligence" /><category term="Other Technologies" /><category term="AI governance" /><category term="Ethics and Impact Assessment" /><category term="Policy Making" /><category term="General" /><summary type="html"><![CDATA[The new home of the AI standards community. Dedicated to knowledge sharing, capa (...)]]></summary></entry><entry><title type="html">AI for Cybersecurity and Cybercrime - How Artificial Intelligence Is Battling Itself</title><link href="https://ricardo.chavarriaga.me/suggested%20readings/AI-for-Cybersecurity-and-Cybercrime-Ho/" rel="alternate" type="text/html" title="AI for Cybersecurity and Cybercrime - How Artificial Intelligence Is Battling Itself" /><published>2023-11-22T00:00:00+00:00</published><updated>2023-11-22T00:00:00+00:00</updated><id>https://ricardo.chavarriaga.me/suggested%20readings/AI-for-Cybersecurity-and-Cybercrime---Ho</id><content type="html" xml:base="https://ricardo.chavarriaga.me/suggested%20readings/AI-for-Cybersecurity-and-Cybercrime-Ho/"><![CDATA[<p>Experts believe that Artificial Intelligence (AI) and Machine Learning (ML) have both negative and positive effects on cybersecurity. AI algorithms use training data to learn how to respond to different situations. They learn by copying and adding additional information as they go along. This article reviews the positive and the negative impacts of AI on cybersecurity.</p>

<p><a href="https://www.computer.org/publications/tech-news/trends/ai-fighting-ai">Link</a></p>

<p>Source: <a href="https://www.computer.org/publications/tech-news/trends">IEEE Computer Society</a></p>

<table>
  <tbody>
    <tr>
      <td>Ethical Principles: Robustness and Safety</td>
      <td>Privacy and Data Governance</td>
    </tr>
  </tbody>
</table>

<table>
  <tbody>
    <tr>
      <td>SDGs: N/A</td>
    </tr>
  </tbody>
</table>]]></content><author><name>Ricardo Chavarriaga</name></author><category term="Suggested Readings" /><category term="Artificial Intelligence" /><category term="Cyber Technologies" /><category term="Cybersecurity" /><category term="Security" /><summary type="html"><![CDATA[Experts believe that Artificial Intelligence (AI) and Machine Learning (ML) have (...)]]></summary></entry><entry><title type="html">AI regulation: a pro-innovation approach</title><link href="https://ricardo.chavarriaga.me/report/AI-regulation-a-pro-innovation-approach/" rel="alternate" type="text/html" title="AI regulation: a pro-innovation approach" /><published>2023-11-22T00:00:00+00:00</published><updated>2023-11-22T00:00:00+00:00</updated><id>https://ricardo.chavarriaga.me/report/AI-regulation--a-pro-innovation-approach</id><content type="html" xml:base="https://ricardo.chavarriaga.me/report/AI-regulation-a-pro-innovation-approach/"><![CDATA[<p>This white paper details our plans for implementing a pro-innovation approach to AI regulation. We’re seeking views through a supporting consultation.</p>

<p><a href="https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach">Link</a></p>

<p>Source: <a href="https://www.gov.uk/">UK Department for Science, Innovation and Technology</a></p>

<table>
  <tbody>
    <tr>
      <td>Ethical Principles: Multiple Ethical Dimensions</td>
    </tr>
  </tbody>
</table>

<table>
  <tbody>
    <tr>
      <td>SDGs: N/A</td>
    </tr>
  </tbody>
</table>]]></content><author><name>Ricardo Chavarriaga</name></author><category term="Report" /><category term="Artificial Intelligence" /><category term="Other Technologies" /><category term="AI governance" /><category term="Ethics and Impact Assessment" /><category term="Policy Making" /><category term="General" /><summary type="html"><![CDATA[This white paper details our plans for implementing a pro-innovation approach to (...)]]></summary></entry><entry><title type="html">Atlas of Automation Switzerland</title><link href="https://ricardo.chavarriaga.me/catalog/Atlas-of-Automation-Switzerland/" rel="alternate" type="text/html" title="Atlas of Automation Switzerland" /><published>2023-11-22T00:00:00+00:00</published><updated>2023-11-22T00:00:00+00:00</updated><id>https://ricardo.chavarriaga.me/catalog/Atlas-of-Automation-Switzerland</id><content type="html" xml:base="https://ricardo.chavarriaga.me/catalog/Atlas-of-Automation-Switzerland/"><![CDATA[<p>The Atlas of Automation aims to shed light into this black box. It offers a first directory of examples of algorithmic systems that are used in Switzerland, whether by government agencies or the private sector. It focuses on algorithmic systems that are used in decision-making, thus to predict, recommend, affect or take decisions about human beings, or that generate content used by or on human beings. It does not aim to be comprehensive but rather illustrates the variety of use cases, through which algorithms affect us and our society.</p>

<p><a href="https://algorithmwatch.ch/en/atlas/">Link</a></p>

<p>Source: <a href="https://algorithmwatch.org/en/">AlgorithmWatch CH</a></p>

<table>
  <tbody>
    <tr>
      <td>Ethical Principles: Accountability</td>
      <td>Transparency</td>
    </tr>
  </tbody>
</table>

<table>
  <tbody>
    <tr>
      <td>SDGs: 17. Partnership for the Goals</td>
      <td>16. Peace, Justice and Strong Institutions</td>
    </tr>
  </tbody>
</table>]]></content><author><name>Ricardo Chavarriaga</name></author><category term="Catalog" /><category term="Artificial Intelligence" /><category term="Other Technologies" /><category term="AI governance" /><category term="Ethics and Impact Assessment" /><category term="General" /><category term="Policy Making" /><summary type="html"><![CDATA[The Atlas of Automation aims to shed light into this black box. It offers a firs (...)]]></summary></entry><entry><title type="html">CDEI portfolio of AI assurance techniques</title><link href="https://ricardo.chavarriaga.me/catalog/CDEI-portfolio-of-AI-assurance-technique/" rel="alternate" type="text/html" title="CDEI portfolio of AI assurance techniques" /><published>2023-11-22T00:00:00+00:00</published><updated>2023-11-22T00:00:00+00:00</updated><id>https://ricardo.chavarriaga.me/catalog/CDEI-portfolio-of-AI-assurance-technique</id><content type="html" xml:base="https://ricardo.chavarriaga.me/catalog/CDEI-portfolio-of-AI-assurance-technique/"><![CDATA[<p>This page provides details about the CDEI portfolio of AI assurance techniques and how to use it.</p>

<p><a href="https://www.gov.uk/guidance/cdei-portfolio-of-ai-assurance-techniques">Link</a></p>

<p>Source: <a href="https://www.gov.uk/">UK Department for Science, Innovation and Technology</a></p>

<table>
  <tbody>
    <tr>
      <td>Ethical Principles: Multiple Ethical Dimensions</td>
    </tr>
  </tbody>
</table>

<table>
  <tbody>
    <tr>
      <td>SDGs: N/A</td>
    </tr>
  </tbody>
</table>]]></content><author><name>Ricardo Chavarriaga</name></author><category term="Catalog" /><category term="Artificial Intelligence" /><category term="Other Technologies" /><category term="AI governance" /><category term="Ethics and Impact Assessment" /><category term="General" /><summary type="html"><![CDATA[This page provides details about the CDEI portfolio of AI assurance techniques a (...)]]></summary></entry></feed>