{"id":1851,"date":"2026-01-15T17:12:59","date_gmt":"2026-01-15T16:12:59","guid":{"rendered":"https:\/\/laurenswaling.com\/?p=1851"},"modified":"2026-01-15T17:13:00","modified_gmt":"2026-01-15T16:13:00","slug":"responsible-ai-sometimes-means-moving-forward-not-slowing-down","status":"publish","type":"post","link":"https:\/\/laurenswaling.com\/?p=1851&lang=en","title":{"rendered":"Responsible AI Sometimes Means Moving Forward, Not Slowing Down"},"content":{"rendered":"\n<p id=\"ember599\">Responsible AI is often discussed as if artificial intelligence is mainly a risk that needs to be controlled. In education, policy and organisations, the conversation usually starts from the same assumption: AI is dangerous, biased or unreliable, so we should be careful.<\/p>\n\n\n\n<p id=\"ember600\">But this misses a key question:<\/p>\n\n\n\n<p id=\"ember601\"><strong>Is it sometimes irresponsible not to use AI?<\/strong><\/p>\n\n\n\n<p id=\"ember602\">That question is rarely asked. And if we care about ethics, it should be central.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"ember603\">What current research on Responsible AI focuses on<\/h3>\n\n\n\n<p id=\"ember604\">This week I learned about recent research from <a href=\"https:\/\/www.linkedin.com\/in\/maaike-harbers-2a2a404\/\">Maaike Harbers<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/mmmpeeters\/\">Marieke Peeters<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/francien-dechesne-1b89685\/\">Francien Dechesne \ud83d\udfe5<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/m-birna-van-riemsdijk-%F0%9F%9F%A5-1254726\/\">M. Birna van Riemsdijk \ud83d\udfe5<\/a> and <a href=\"https:\/\/www.linkedin.com\/in\/pascalwiggers\/\">Pascal Wiggers<\/a> into <a href=\"https:\/\/aic4nl.nl\/wp-content\/uploads\/2025\/12\/TRAI-document.pdf\">responsible use of AI in universities and universities of applied sciences<\/a>. 
Nice piece of work.<\/p>\n\n\n\n<p id=\"ember610\">It shows familiar patterns:<\/p>\n\n\n\n<ul>\n<li>AI ethics is addressed, but often in a fragmented way<\/li>\n\n\n\n<li>Integration across the curriculum is preferred over separate courses<\/li>\n\n\n\n<li>Content depends heavily on a few motivated lecturers<\/li>\n\n\n\n<li>Long-term institutional commitment is often missing<\/li>\n<\/ul>\n\n\n\n<p id=\"ember612\">The recommendations make sense: better integration, shared resources, professional development, multidisciplinary perspectives, and clear governance.<\/p>\n\n\n\n<p id=\"ember613\">But underneath many of these discussions lies an unspoken assumption:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote\">\n<p><strong>AI is a problem first, and an opportunity second.<\/strong><\/p>\n<\/blockquote>\n\n\n\n<p id=\"ember615\">That framing deserves scrutiny.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"ember616\">Ethics is about making choices, not following rules<\/h3>\n\n\n\n<p id=\"ember617\">Ethics is not a checklist. It is not about applying fixed rules without thinking. Ethics is about weighing options, understanding context, and making choices you can explain and defend.<\/p>\n\n\n\n<p id=\"ember618\">And sometimes, the responsible choice is to act.<\/p>\n\n\n\n<p id=\"ember619\">Responsible choices do not automatically slow down innovation. In some cases, responsibility requires moving forward rather than holding back.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"ember620\">Humans are not neutral either<\/h3>\n\n\n\n<p id=\"ember621\">Many debates compare AI to an idealised version of human decision-making: fair, rational and unbiased. Reality looks very different.<\/p>\n\n\n\n<p id=\"ember622\">We know from decades of research that humans regularly discriminate in hiring and selection. Often unconsciously. 
Often inconsistently. Often without being able to explain their decisions.<\/p>\n\n\n\n<p id=\"ember623\">Well-designed AI systems, on the other hand:<\/p>\n\n\n\n<ul>\n<li>Make mistakes, but in a consistent way<\/li>\n\n\n\n<li>Can be tested, audited and improved<\/li>\n\n\n\n<li>Can be designed to ignore irrelevant personal characteristics<\/li>\n\n\n\n<li>Can focus on skills, experience and context rather than gut feeling<\/li>\n<\/ul>\n\n\n\n<p id=\"ember625\">AI is not perfect. But pretending that humans are is not ethical either.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"ember626\">When using AI can be the responsible choice<\/h3>\n\n\n\n<p id=\"ember627\">A serious discussion about responsible AI asks different questions:<\/p>\n\n\n\n<ul>\n<li>What happens if we leave this decision to humans alone?<\/li>\n\n\n\n<li>What kinds of mistakes do they consistently make?<\/li>\n\n\n\n<li>Can careful use of AI reduce those mistakes?<\/li>\n\n\n\n<li>Is not acting also a choice, with real consequences?<\/li>\n<\/ul>\n\n\n\n<p id=\"ember629\">In areas like work and employment, there is growing evidence that carefully designed matching systems (take a look at <a href=\"https:\/\/www.linkedin.com\/company\/8vance\/\">8vance<\/a>) can reduce bias and improve access to opportunities. Not despite technology, but because of it.<\/p>\n\n\n\n<p id=\"ember630\">This does not require blind trust in AI. 
It requires evidence, transparency and continuous evaluation.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"ember631\">From risk avoidance to responsible action<\/h3>\n\n\n\n<p id=\"ember632\">For education, this means expanding how we teach responsible AI:<\/p>\n\n\n\n<ul>\n<li>Not only where AI can fail, but where humans fail as well<\/li>\n\n\n\n<li>Not only rules and compliance, but responsible decision-making in context<\/li>\n\n\n\n<li>Not only when to slow down, but when moving forward is justified<\/li>\n<\/ul>\n\n\n\n<p id=\"ember634\">That demands research, real-world cases and honest debate. Including uncomfortable questions.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"ember635\">In conclusion<\/h3>\n\n\n\n<p id=\"ember636\">Responsible AI is not about being for or against technology. It is about making better, well-reasoned choices.<\/p>\n\n\n\n<p id=\"ember637\">Sometimes caution is necessary. Sometimes inaction causes harm. And sometimes using AI responsibly is the most ethical option available.<\/p>\n\n\n\n<p id=\"ember638\">That nuance deserves a central place in the conversation. Especially in education. Especially now.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Responsible AI is often discussed as if artificial intelligence is mainly a risk that needs to be controlled. In education, policy and organisations, the conversation usually starts from the same assumption: AI is dangerous, biased or unreliable, so we should be careful. 
But this misses a key question: Is it sometimes irresponsible not to use [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":1853,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"om_disable_all_campaigns":false,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"_uf_show_specific_survey":0,"_uf_disable_surveys":false,"footnotes":""},"categories":[1,32,17],"tags":[],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/laurenswaling.com\/index.php?rest_route=\/wp\/v2\/posts\/1851"}],"collection":[{"href":"https:\/\/laurenswaling.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/laurenswaling.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/laurenswaling.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/laurenswaling.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1851"}],"version-history":[{"count":2,"href":"https:\/\/laurenswaling.com\/index.php?rest_route=\/wp\/v2\/posts\/1851\/revisions"}],"predecessor-version":[{"id":1854,"href":"https:\/\/laurenswaling.com\/index.php?rest_route=\/wp\/v2\/posts\/1851\/revisions\/1854"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/laurenswaling.com\/index.php?rest_route=\/wp\/v2\/media\/1853"}],"wp:attachment":[{"href":"https:\/\/laurenswaling.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1851"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/laurenswaling.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1851"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/laurenswaling.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1851"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}