I recently attended the "Ladies That UX" talk on AI, and it was such an enlightening experience that I thought it would be perfect for this blog entry. It made me reflect on how AI is transforming the way companies design, develop, and deploy products, and on how it forces us to think about ethics, accessibility, and real-world applications. One of the most intriguing takeaways was that AI is not a universal solution; its success depends on how well it is tailored to the needs of specific industries and user groups. This insight prompted me to consider how AI might evolve, and how we, as designers and researchers, can help shape that future.

Case Study: ESO and AI in EHR (Electronic Health Records)

One of the standout examples was ESO's use of AI in their Electronic Health Record (EHR) system. The company employs AI to generate narratives, streamlining the documentation process for healthcare providers. However, what struck me was the cost associated with using AI. It’s not a tool that can be used freely or for every entry; it’s an expensive resource that can only be deployed once per task. This made me reflect on a key question: How do we balance the high cost of AI with its potential benefits? In industries like healthcare, where time and efficiency are critical, AI seems like a natural fit, but its accessibility is compromised when it’s not sustainable for everyday use. Perhaps this calls for a deeper look at how we can optimise the cost-effectiveness of AI, particularly in resource-heavy industries.

AI Tools in Design: ChatGPT, Claude, and Miro AI

I was also fascinated by how AI tools such as ChatGPT, Claude, and Miro AI are making their way into the design process. These tools have become integral not just for streamlining workflows, but for helping designers think creatively and critically.

Accessibility Concerns: The Struggles of Older Products

A sobering point raised during the talk was the accessibility struggles in older products. Specifically, ESO's EHR system had issues with keyboard accessibility and colour contrast. These limitations are a reminder that while AI can be an incredibly powerful tool, it’s also crucial to prioritise accessibility at the foundational level. As we continue to evolve and innovate, AI alone will not solve problems that are deeply embedded in outdated or poorly designed systems. It left me reflecting: How often do we overlook these fundamental design issues in favour of new technologies like AI? If we’re not careful, we risk reinforcing systemic flaws with shiny new tools.
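To make the colour-contrast point concrete: WCAG 2.x defines contrast as a ratio between the relative luminance of two colours, and it is straightforward to check programmatically. The sketch below implements that published formula; the specific hex colours are just illustrative examples, not values from ESO's actual product.

```python
def _channel(c: int) -> float:
    """Linearise one sRGB channel (0-255) per the WCAG 2.x formula."""
    c = c / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_colour: str) -> float:
    """Relative luminance of a colour given as '#rrggbb' (0.0 to 1.0)."""
    h = hex_colour.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _channel(r) + 0.7152 * _channel(g) + 0.0722 * _channel(b)

def contrast_ratio(fg: str, bg: str) -> float:
    """WCAG contrast ratio between two colours, from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum possible contrast, 21:1.
print(round(contrast_ratio("#000000", "#ffffff"), 1))  # 21.0

# WCAG AA requires at least 4.5:1 for normal body text;
# #767676 on white is a grey that just clears that bar.
print(contrast_ratio("#767676", "#ffffff") >= 4.5)  # True
```

A check like this can run in a design-system linter or CI pipeline, which is exactly the kind of foundational safeguard that no amount of AI layered on top of a product can substitute for.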

Rapid7: Efficiency, Ethics, and the Environmental Impact of AI

Rapid7’s application of AI was particularly thought-provoking in terms of ethics and efficiency. The company has integrated AI into processes such as writing proposals, creating surveys, and outlining user stories. While this improves team productivity, there is an ethical cost to consider: chatbots consume significantly more electricity than ordinary websites. That raised a critical question: are we mindful enough of AI’s environmental impact?

I found myself reflecting on how important it is to consider the sustainability of AI, especially as it becomes more ubiquitous. It’s not just about how we use AI but how we balance technological innovation with environmental responsibility. Should the energy consumption of AI be a key factor in product development? Rapid7’s attention to cybersecurity—ensuring AI can be used to counter AI-driven threats—also raises the question of how AI itself must adapt to constantly evolving challenges.

Trust and Transparency: Building Ethical AI

Another important point that resonated with me was the focus on trust and transparency. Users today expect transparency—they want to understand how AI works, how their data is being used, and whether the AI’s output can be trusted. I couldn't help but think how often companies overlook this aspect. It’s easy to get lost in the technical marvels of AI, but without clear communication, users may feel uneasy about its role in their lives. The trust users place in AI is fragile, and companies must work hard to ensure that their systems are not only effective but honest and transparent.

Designing AI with User Needs in Mind

One of the most profound insights from the talk was the emphasis on understanding user needs before integrating AI into products. AI must be a solution, not a novelty. Rapid7’s focus on designing their automated assistants to serve real user needs was a reminder that AI should not exist for the sake of innovation alone—it should always serve a real purpose. Reflecting on this, I realise that AI should always aim to improve the user experience, whether by increasing efficiency, providing better insights, or solving a specific pain point.

The real question that’s left with me is: How often do we design AI tools with the user’s needs truly at the core? And how can we ensure that we’re not just driven by the allure of technology but focused on its practical, human-centred application?

<aside> 👩🏻‍🦱

As I reflect on the insights from the talk, it becomes clear that AI is both a tool of immense potential and a source of new challenges. It’s an exciting time, but it also requires us to ask deeper questions about its sustainability, accessibility, ethics, and transparency. What struck me most is how AI is not just about technology; it’s about people. Whether we are creating tools for healthcare, design, or cybersecurity, we must always ask: How does this technology truly serve the people it’s meant to help? The role of AI in design and research will continue to evolve, but our responsibility as creators is to ensure that it evolves in ways that are ethical, transparent, and user-focused. As I look ahead, I’m excited to explore how AI can be better integrated into the work I do, but I also feel a deeper responsibility to ensure its human-centred development.

</aside>