Enhancing User Experience in conversational GenAI: Tackling Polysemy


May 16, 2024

AI EXPERIMENT


Discover how integrating UX design principles into chat-based GenAI can overcome the challenges of polysemy—words with multiple meanings—ensuring clearer and more effective user interactions.

Introduction


My husband, Xavi, and I, both UX enthusiasts and emerging AI aficionados, have spent countless hours exploring and discussing how user-centered design principles can enhance interactions with generative AI (GenAI), particularly in chat applications powered by Large Language Models (LLMs). One significant challenge we’ve encountered is polysemy—the presence of multiple meanings for a single word—which can lead to misunderstandings and inefficiencies.

We rolled up our sleeves to tackle this problem and created a dynamic copilot featuring intuitive, selectable contextualizers that the user can pick from—but only when a relevant risk of polysemy is identified in the prompt.

The Problem

In daily use, even sophisticated models like GPT-4o often default to the most common interpretation of a polysemic word, leading to biased and irrelevant responses. This typically results in unnecessary back-and-forth exchanges that can frustrate users. For example:

  • User Query: "What’s the best way to file?"

  • AI Misinterpretation: The AI provides information on filing documents, while the user was asking about filing nails.

  • Resulting Issue: The user needs to provide additional context to guide the AI, causing delays and potential frustration.



This misunderstanding leads to additional user input, slowing down the interaction and increasing frustration.


Another example—the one we will use for this little experiment:

  • User Query: "What are the top 3 qualities of a PM?"

  • AI Misinterpretation: The AI assumes "PM" means "Product Manager" and answers on that basis.

  • Resulting Issue: As the user, I was actually asking about "Project Manager".


The Opportunity

Addressing polysemy effectively can significantly streamline interactions by ensuring AI systems correctly understand user intent from the start. This not only improves the efficiency of the communication but also enhances user satisfaction.

The Solution

To tackle these challenges, we added a lightweight intermediary LLM to the experience, designed to:

  1. Identify Polysemy: The system detects words or acronyms that could have multiple meanings.

  2. Evaluate Context: It assesses the context to determine if the ambiguity might affect the conversation's outcome.

  3. Offer Choices: When applicable, the AI presents a list of distinct contexts or meanings (e.g., 'Product Manager' vs. 'Project Manager') for the user to choose from directly in the interface.
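The three steps above can be sketched in code. The snippet below is a minimal, illustrative mock-up, not our actual implementation: the hardcoded term table and the function names (`detect_polysemy`, `needs_clarification`, `clarification_options`) are hypothetical stand-ins, and in the real experience steps 1 and 2 are handled by the intermediary LLM rather than a lookup table.

```python
# Illustrative sketch of the disambiguation flow. The term table below is
# a stand-in for the intermediary LLM's judgment, used here so the flow
# is runnable without any model call.
AMBIGUOUS_TERMS = {
    "pm": ["Product Manager", "Project Manager", "Prime Minister"],
    "file": ["file a document", "file your nails", "a computer file"],
}

def detect_polysemy(prompt: str) -> dict:
    """Step 1: flag words or acronyms with multiple known meanings."""
    words = [w.strip("?.,!").lower() for w in prompt.split()]
    return {w: AMBIGUOUS_TERMS[w] for w in words if w in AMBIGUOUS_TERMS}

def needs_clarification(prompt: str, candidates: dict) -> bool:
    """Step 2: decide whether the ambiguity could affect the outcome.
    Naive rule: ask only when the prompt names none of the candidate
    meanings itself (i.e., no disambiguating context is present)."""
    lower = prompt.lower()
    return any(
        not any(meaning.lower() in lower for meaning in meanings)
        for meanings in candidates.values()
    )

def clarification_options(prompt: str):
    """Step 3: return selectable contexts for the UI, or None to let
    the prompt pass straight through to the main model."""
    candidates = detect_polysemy(prompt)
    if candidates and needs_clarification(prompt, candidates):
        return candidates
    return None
```

For the experiment's prompt, `clarification_options("What are the top 3 qualities of a PM?")` surfaces the "Product Manager" vs. "Project Manager" choice, while a prompt that already says "Project Manager" passes through with no interruption.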

This solution helps clarify the user’s intent before the main AI processes the response, significantly reducing misunderstandings.

The Results

Implementing this solution led to several benefits:

  • Enhanced User Experience: Users spend less time typing and clarifying their needs, which reduces cognitive load and overall effort.

  • Decreased Bias: By allowing users to specify their intent, the system reduces the likelihood of defaulting to biased responses.

  • Cost Efficiency: The new system uses fewer computational resources by reducing the number of interactions needed to resolve a query.

Conclusion

Through our experiment, we’ve seen firsthand how integrating basic UX principles with AI technology can solve complex issues like polysemy, transforming frustrating interactions into seamless and productive experiences. This approach not only makes AI more user-friendly but also more economically efficient, demonstrating the profound impact of thoughtful user-centered design on the future of AI.

Made with ♥ in Barcelona