AI Design Tools Research
Evaluating how AI fits into real design and development workflows

Project Overview
During my internship on Nokia's Common Software Foundation UX Research team, I worked on a research project evaluating whether AI design tools could fit into the UX team's existing workflows across four use cases. This case study focuses on one of those use cases, UI testing and QA, where we conducted the deepest research, which informed subsequent work and our final tool recommendations.
My Impact
I worked across the full research cycle for this use case, from shaping the research questions to turning findings into recommendations. That included conducting and analyzing user interviews, co-designing and distributing a collaboration survey, mapping workflows across product teams, identifying where things were breaking down, and benchmarking tools against what our users actually needed.
Team
Anmol Sekhon
Márcia Marranita
Francisco Zenha Preto
Platform
Figma
Microsoft Teams
Microsoft Forms
Microsoft PowerPoint
Timeline
Aug - Sept 2025
2 Months
Deliverables
Research Report
Collaboration Survey
Workflow Mapping
Tool Benchmark
Goal & Target Audience
As part of the broader initiative to evaluate AI design tools, this use case focused on understanding how designers and developers approach UI testing and QA. We aimed to identify friction points in the process and assess whether AI tools could meaningfully reduce manual effort. Our primary audience included product and UX designers, frontend developers, and the Nokia Design System (NDS) team. These groups collaborate closely during handoff and implementation, yet their workflows and expectations differed significantly.
Research Timeline
This AI tools research initiative spanned multiple use cases, but this case study focuses on UI testing and QA. Other use cases are excluded to keep the narrative focused and within NDA scope.

Discovery
Defining Research Questions
We focused our research on four practical goals: documenting how teams currently test UI, identifying handoff friction, isolating manual tasks, and finding where AI could actually help. From these four areas, we defined initial research questions to guide and plan the study. Through initial probing, we quickly realized the biggest problem wasn't the tools but language and communication: designers and developers had no shared definition of “UI testing,” and this misalignment was the root of the friction we observed.

User Research
Interviews
To gather more information, I led targeted interviews with designers, developers, and testers to audit our handoff and QA processes. Working closely with my team, I wrote the research script and facilitated the sessions while managing note-taking. We focused on the end-to-end workflow, looking specifically at tool usage, documentation habits, and the transition from design to code.

Quantitative Research
User Survey
To better understand how collaboration varies across different team sizes and roles within the development cycle, we conducted a Design and Development Collaboration Survey, receiving 38 responses. The goal was to explore how tools are used, what aspects of the workflow are working, and how handoff and UI testing are handled across roles. Participants ranged from designers to frontend developers and architects, providing a cross-functional perspective on the QA process.

Systems Thinking
Mapping Team Workflows
We identified that UX product teams and the Design System team approach QA differently, so we mapped the UI testing workflow of one representative product team and compared it with the Design System team's. The process typically moved from design handoff to development, QA preparation, UI review, and finally feedback and iteration.

Synthesis
Pain Points & Opportunities
Three pain points showed up consistently across interviews and survey data. The biggest was manual review: designers were spending substantial time visually comparing designs to the final implementation, often under tight deadlines. This was compounded by the lack of a clear UI testing process, in which responsibilities between design and development weren't always well defined. On top of that, teams were working across a wide range of tools, which added even more friction to an already fragmented workflow.
From these pain points, opportunities emerged around design-to-code validation, centralized feedback systems, design-system-aware automation, and earlier cross-role collaboration. However, these opportunities would only be viable if workflow clarity improved first.

Key Insight
Good QA comes from good handoff documentation
Evaluation
Ranking Tools
We benchmarked tools including Chromatic, Vercel, Polypane, and BugHerd against criteria such as automation capabilities, design system awareness, integration with GitHub and Figma, enterprise readiness, and feedback management.
A key finding was that we did not uncover strong AI-native tools built specifically for design QA; most automation tools were engineering-focused rather than designer-centered.

Recommendations
Moving Forward
Based on our research, we also offered recommendations for improving team structure and workflow beyond introducing new tools: clarifying cross-functional roles, establishing a consistent QA workflow, and improving handoff documentation. Building on this, our team will continue exploring design-to-code validation, centralized feedback, and design-system-aware automation.

Reflection
This project reshaped how I think about AI in design. It reinforced that tools do not solve workflow problems on their own, and I saw firsthand the impact of communication and shared goals across teams. It also taught me the importance of uncovering whether friction stems from technology gaps or from structural misalignment. Through this work, I strengthened my ability to translate qualitative friction into strategic recommendations and to navigate conversations across design and engineering stakeholders.