AI-Powered Area Summaries
Background:
Google Maps Platform has a vast amount of data about Places (businesses, landmarks, parks, etc.) that enterprise customers use to enrich their end-users' experiences.
For example, a hospitality company may use the Places API to display a list of restaurants near their hotels, along with user-submitted photos and reviews.
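To make that concrete, here is a minimal sketch of what such a lookup might look like against the Places API Nearby Search endpoint. This is purely illustrative, not any customer's actual integration; the coordinates and API key are placeholders:

    import requests

    # Placeholder hotel coordinates and API key.
    HOTEL_LAT, HOTEL_LNG = 37.4220, -122.0841
    API_KEY = "YOUR_API_KEY"

    # Nearby Search: restaurants within 1.5 km of the hotel.
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/place/nearbysearch/json",
        params={
            "location": f"{HOTEL_LAT},{HOTEL_LNG}",
            "radius": 1500,  # meters
            "type": "restaurant",
            "key": API_KEY,
        },
    )
    resp.raise_for_status()

    for place in resp.json().get("results", []):
        # Each result carries the place name plus optional rating and
        # photo references that can be rendered next to user reviews.
        print(place["name"], place.get("rating"), len(place.get("photos", [])))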
During foundational research, we learned that a specific segment of enterprise customers was often using Places data and Google Search to manually create their own summaries of Places (e.g., what is this park like?) and of the areas around specific Points of Interest (e.g., what's it like around this hotel?).
We recognized an opportunity to use AI to help these customers more easily generate helpful area summaries with Places data.
Challenge:
With help from our foundational research, our design, engineering, and UX writing team came up with two concept mocks that included variations of:
AI-generated outputs (i.e., the content of the area summary)
UI (i.e., how the content was displayed on screen)
Leadership wanted to launch a functional demo within two months, in time for Google I/O, so we needed to quickly evaluate and iterate on these mocks.
We needed to evaluate how end-users in this specific customer segment:
Understood and used different elements of the Area Summaries
Expected to interact with different UI elements included in the Area Summaries
Team:
To move as quickly as possible, my fellow UXR and I split the research into two distinct workflows:
End-User research: feedback from the end-users of our enterprise customers (i.e., the actual users of this AI product) to define needs and expectations
Developer research: feedback from enterprise customers on how they would expect to implement and control the AI product for their end-users, to inform how we built the backend components of the product
I led the end-to-end End-User research for this evaluative study. This included planning, data collection, analysis, and ongoing user advocacy and follow-up research. I collaborated with my fellow UXR to combine our findings and recommendations into a single deck.
For this work, I was embedded in a core working group including:
PM
Lead Engineer
UX Writer
UX Designer
Methodology:
For the End-User research, we decided to run two rounds of unmoderated early-stage concept testing on UserTesting. This allowed us to A) capture attitudinal data quickly and B) leverage UserTesting's existing participant pool instead of other, more laborious recruitment processes.
Participants:
We tested both concepts with 40 participants recruited against the same criteria.
To counterbalance order effects, 20 participants saw Concept A first and 20 participants saw Concept B first.
Analysis:
Bottom-up analysis focused on key questions that targeted stakeholders' biggest knowledge gaps and core hypotheses.
Outcome:
Recommended a dynamic area-size calculation based on area density (sketched after this list), which increased the number of use cases, and therefore the market size, that Area Summaries could be relevant to.
Reduced development time by two weeks by demonstrating clear preference for, and comprehension of, the AI-generated content in Concept A, which allowed us to offload any Concept B-specific requirements.
By reducing dev time, we gave our team more prep time for Google I/O, which enabled more internal feedback and resulted in a hugely successful live demonstration in front of Google leadership and thousands of developers.
Feedback from these studies served as a blueprint for follow-up iterative studies. We continued to collect feedback in similar ways until our official launch.
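To illustrate the dynamic area-sizing recommendation above, here is a minimal sketch of the underlying idea, not the shipped implementation; count_places_within is a hypothetical helper standing in for a real Places density query, and all thresholds are placeholder values:

    def dynamic_radius(lat, lng, count_places_within,
                       min_radius_m=250, max_radius_m=5000, target_places=30):
        # Grow the summary area until it covers enough places: dense urban
        # areas hit the target quickly and keep a tight radius, while sparse
        # areas expand until they have enough context to summarize.
        radius = min_radius_m
        while radius < max_radius_m and count_places_within(lat, lng, radius) < target_places:
            radius = min(radius * 2, max_radius_m)  # double the radius each pass
        return radius

The point of a density-driven radius is that the same feature can serve both a dense city block and a sparse rural highway, which is what broadened the set of relevant use cases.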
What I Learned:
This was a fast-paced project that reported up to director-level leadership, which meant I needed to carefully balance speed with polished rigor. I was used to communicating results in scrappy, on-the-fly ways to meet my working groups' immediate needs, but I hadn't yet flexed the muscle of simultaneously producing quick executive summaries at the end of weekly sprints. I learned how to create two artifacts for two different audiences quickly and effectively.
If I were to do this project again, I would have shared actual footage of the tests with my stakeholders sooner. I wasted time trying to communicate sentiment when letting them see it happen in real time would have been a more effective way of telling the story. I now try to let users tell the story when I am having trouble!