This page collects resources on all subjects related to the responsible use of AI, including governance and regulation such as the EU AI Act.
The page is open for anyone to create and add resources. We encourage individuals with an interest in and experience of the topic to share resources and engage in discussions with the community on this page.
The high-level goals and purpose of this page are:
- To collect useful resources on the topic
- To help actors, both individuals and organizations, find each other
- To share experiences and learn together how to work practically with Responsible AI in organizations
- To create engagement and discussion around the topic
AI has the potential to transform entire industries as well as society at large. It is of the highest importance that this happens in a safe, lawful, and responsible way, so that people are not harmed directly or indirectly by applied AI solutions, while the potential that AI brings is still unlocked.
The pace of AI development is extremely high. This creates a strong need to pay close attention to the implications of adopting AI, while also building trust at the societal level over time. Each AI use case has unique risks and considerations that require context-specific approaches. We also need to support experimentation and exploration, as this is where potential risks become concrete and can be addressed.
Moreover, due to the speed of development, we are not yet able to foresee all possible risks and opportunities with AI. It is therefore paramount that we build a strong AI ecosystem that is capable of applying responsible principles over time. A robust AI ecosystem requires national and international collaboration, active contribution and participation of a diverse group of stakeholders and domain experts, and sharing of information and best practices.
The adoption of AI spans many fields, both technical, such as data management and infrastructure, and non-technical, such as behavioral science and people management. A multi-disciplinary approach throughout the lifecycle of AI development and application will therefore increase our understanding of how AI solutions can be most useful without producing undesirable results, as well as of their effects on society as a whole.
For organizations developing or using AI, we encourage building on existing work done by organizations such as the EU, NATO, the OECD, and UNESCO, and we would like to place additional emphasis on some specific perspectives.
Lawfulness and transparency: All AI solutions should be developed and used within the rule of law, and stakeholders should be held accountable accordingly. We strongly advocate proportionate and reasonable transparency about how AI systems are developed and how they work. This involves communicating the capabilities and limitations of AI in an understandable way to all stakeholders, including end users and citizens.
Ethical and fair: AI systems must be designed and operated in a manner that respects human rights, values, and cultural diversity. This includes mitigating risks in AI algorithms, such as bias and discrimination. It is essential to consider ethical implications at every stage of AI development and deployment.
Safety and robustness: AI systems must be reliable, controllable, and secure. They should function as intended under various conditions and be resilient to manipulations and errors. This includes safeguarding against malicious use of AI technologies and ensuring that AI does not pose unintended harm to people or the environment.
2024-12-16 20:22
Linked Weblink
This white paper explores the development of AI agents – autonomous systems capable of sensing, learning and acting upon their environments. It looks at how they are linked to recent advances in large language and multimodal models and highlights how they can enhance efficiency across sectors such as healthcare, education and finance. It also discusses the benefits and risks, and the opportunities and challenges, associated with AI agents.
This guidebook offers a window onto the activity of the etami consortium. It covers actionable guidelines to develop legal, trustworthy, and ethical Artificial Intelligence. The focus is put on quality-centric lifecycle models for AI systems, legal compliance, and AI auditing practices. etami – Ethical and Trustworthy Artificial and Machine Intelligence – is a non-profit organisation that works on making ethical AI principles actionable. etami understands that the best path towards trustworthy and ethical AI starts with quality-centric lifecycle models, processes involving all the stakeholders, and auditing practices. By developing systems with care while following a number of simple principles, it is straightforward to make such systems safe and trustworthy, and to follow ethical and legal principles – even where these change. This handbook, of which you can only see a snapshot in time, lays down these principles. If you are researching, developing, or applying AI methodologies, this handbook is for you.
2024-10-16 08:05
Linked Weblink
Today we are publishing a significant update to our Responsible Scaling Policy (RSP), the risk governance framework we use to mitigate potential catastrophic risks from frontier AI systems.
Claims of AI outperforming medical practitioners are under scrutiny, as the evidence supporting many of these claims is not convincing or transparently reported. These claims often lack specificity, contextualization, and empirical grounding. In this comment, we offer constructive ethical guidance that can benefit authors, journal editors, and peer reviewers when reporting and evaluating findings in studies comparing AI to physician performance. The guidance provided here forms an essential addition to current reporting guidelines for healthcare studies using machine learning.
2024-04-22 06:51
Linked Podcast
Listen to this episode from AI for UN on Spotify. Join us in a captivating exploration of the nuanced relationship between artificial intelligence and human rights within the United Nations. This episode of AI for UN features an in-depth discussion between Gemini and Claude 3 Opus—two advanced AI models—on the multifaceted impact of AI technologies. Together, they dissect the potential of AI to both uncover human rights abuses and introduce challenges such as privacy and algorithmic bias. Engage in a conversation that delves into the ethical, legal, and social implications of AI in global governance, and examine the crucial role the UN plays in aligning technological advancements with the core values of humanity. This episode is a must-listen for anyone interested in the intersection of technology, governance, and human rights—from UN staff to tech enthusiasts worldwide.
2024-04-19 12:19
Linked Weblink
AI will soon touch all of our lives, deeply. Organisations must wield AI’s power with care, so they—and their stakeholders—can trust it as it transforms our world.
2024-03-22 16:26
Linked Weblink
Generative AI poses both risks and opportunities. Here’s a road map to mitigate the former while moving to capture the latter from day one.
2024-02-28 09:02
Linked Weblink
Report from Vinnova describing how AI can contribute to gender equality. Important and ever-relevant reading! Here you will also find a series of reports on gender equality published by Vinnova: https://www.vinnova.se/m/jamstalld-innovation/rapporter-kring-jamstalldhet/
Responsible AI Guidelines
Many organizations have done very thorough and well-grounded work in this area to develop guidelines and frameworks. We encourage you to always take a look at these before you start your own work, or simply build on top of them:
2024-10-14 08:02
Linked File (PDF, Word, PPT, etc)
I have compiled this methodological guide based on my interpretation of the EU AI Act. Good luck!
2024-04-11 13:22
Linked Weblink
Research and innovation news alert: The Commission, together with the European Research Area countries and stakeholders, has put forward a set of guidelines to support the European research community in their responsible use of generative artificial intelligence (AI). |
These guidelines are intended to help teachers understand the potential of using AI applications and data in education and to raise awareness of possible risks.