This page collects resources on any subject related to the responsible use of AI, including governance and regulation such as the AI Act.

The page is open for anyone to create and add resources. We encourage individuals with an interest in and experience of the topic to share resources and engage in discussions with the community on this page.

The high-level goals and purpose of this page are:

  • To collect useful resources around the topic
  • To help actors, both individuals and organizations, find each other
  • To share experiences and learn together how to practically work with Responsible AI in organizations
  • To create engagement and discussion on the topic

AI has the potential to transform whole industries as well as society at large. It is of the highest importance that this is done in a safe, lawful, and responsible way to make sure people are not harmed directly or indirectly through applied AI solutions, while also unlocking the potential that AI brings. 

The pace of AI development is extremely high. This brings a strong need for paying close attention to the implications of adopting AI while also building trust on a societal level and over time. Each AI use case has unique risks and considerations that require context-specific approaches. We also need to support experimentation and exploration as this is where potential risks are concretized and addressed. 

Moreover, due to the speed of development, we are not yet able to foresee all possible risks and opportunities with AI. It is therefore paramount that we build a strong AI ecosystem that is capable of applying responsible principles over time. A robust AI ecosystem requires national and international collaboration, active contribution and participation of a diverse group of stakeholders and domain experts, and sharing of information and best practices.

The adoption of AI spans many fields, both technical, such as data management and infrastructure, and non-technical, such as behavioral sciences and people management. Given this, a multi-disciplinary approach throughout the lifecycle of AI development and application will increase our understanding of how AI solutions can be most useful without producing undesirable results, as well as of their effect on society as a whole.

For organizations developing or using AI, we encourage building on existing work by organizations such as the EU, NATO, the OECD, and UNESCO, and would like to place additional emphasis on some specific perspectives.

Lawfulness and transparency: All AI solutions should be developed and used within the rule of law, and stakeholders should be held accountable accordingly. We strongly advocate for proportionate and reasonable transparency about how AI systems are developed and how they work. This involves communicating the capabilities and limitations of AI in an understandable way to all stakeholders, including end users and citizens.

Ethical and fair: AI systems must be designed and operated in a manner that respects human rights, values, and cultural diversity. This includes mitigating risks with AI algorithms such as bias and discrimination. It is essential to consider ethical implications at every stage of AI development and deployment.

Safety and robustness: AI systems must be reliable, controllable, and secure. They should function as intended under various conditions and be resilient to manipulations and errors. This includes safeguarding against malicious use of AI technologies and ensuring that AI does not pose unintended harm to people or the environment.


Resources

2024-04-22 06:51 Linked Podcast
Listen to this episode from AI for UN on Spotify. Join us in a captivating exploration of the nuanced relationship between artificial intelligence and human rights within the United Nations. This episode of AI for UN features an in-depth discussion between Gemini and Claude 3 Opus—two advanced AI models—on the multifaceted impact of AI technologies. Together, they dissect the potential of AI to both uncover human rights abuses and introduce challenges such as privacy and algorithmic bias. Engage in a conversation that delves into the ethical, legal, and social implications of AI in global governance, and examine the crucial role the UN plays in aligning technological advancements with the core values of humanity. This episode is a must-listen for anyone interested in the intersection of technology, governance, and human rights—from UN staff to tech enthusiasts worldwide.
2024-04-19 12:19 Linked Weblink
AI will soon touch all of our lives, deeply. Organisations must wield AI’s power with care, so they—and their stakeholders—can trust it as it transforms our world.
2024-03-22 16:26 Linked Weblink
Generative AI poses both risks and opportunities. Here’s a road map to mitigate the former while moving to capture the latter from day one.
2024-02-28 09:02 Linked Weblink
A report from Vinnova describing how AI can contribute to gender equality. Important and ever-relevant reading!

Here you will also find a series of reports published by Vinnova: https://www.vinnova.se/m/jamstalld-innovation/rapporter-kring-jamstalldhet/

2024-02-09 07:53 Linked Weblink
Consortium includes more than 200 leading AI stakeholders and will support the U.S. AI Safety Institute at NIST.

Today, U.S. Secretary of Commerce Gina Raimondo announced the creation of the U.S. AI Safety Institute Consortium (AISIC), which will unite AI creators and users, academics, government and industry researchers, and civil society organizations in support of the development and deployment of safe and trustworthy artificial intelligence (AI). The consortium will be housed under the U.S. AI Safety Institute (USAISI) and will contribute to priority actions outlined in President Biden’s landmark Executive Order, including developing guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content.

2024-02-05 19:01 Linked Weblink
A site that collects and presents the world's various AI regulations. I imagine it can be very useful for AI startups.
There is a gap between organisations’ stated intentions and actual behaviours when it comes to ethical action in relation to AI. Nathan Kinch offers three reasons for this disconnection and some suggestions for how to close the gap and ‘make the world a better place’.
2024-01-25 15:44 Linked Post
In today's rapidly evolving technological landscape, responsible and ethical adoption of artificial intelligence (AI) is paramount for commercial enterprises. The exponential growth of the global AI market highlights the need for establishing standards and frameworks to ensure responsible AI practices and procurement. To address this crucial gap, the World Economic Forum, in collaboration with GEP, presents a comprehensive guide for commercial organizations.

Attributes

Data, Execution, Competence & Expertise, Organization, Technology, Sustainability

Responsible AI Guidelines

Many organizations have done very thorough and well-grounded work in this area to develop guidelines and frameworks. We encourage you to always take a look at these before you start your own work, or simply build on top of them:

  • EU - Ethics guidelines for trustworthy AI
  • NATO - An Artificial Intelligence Strategy for NATO
  • OECD - OECD AI Principles overview
  • UNESCO - Recommendation on the Ethics of Artificial Intelligence
2024-04-11 13:22 Linked Weblink
Research and innovation news alert: The Commission, together with the European Research Area countries and stakeholders, has put forward a set of guidelines to support the European research community in their responsible use of generative artificial intelligence (AI).
These guidelines are intended to help teachers understand the potential of using AI applications and data in education and to raise awareness of possible risks.