Situation


I was brought in by the Digital Customer Experience (DCE) team at JPMorgan Chase to support MLIO, the product space that owned Machine Learning and Artificial Intelligence for JPMC. The team was made up of 8 scrum teams working in an agile context under 1 product owner, with an area product owner over each pillar.

Through a previous discovery wall and opportunity solution tree process, we identified that we didn’t know much about how our guidance ML/AI models performed within the current pilot ecosystem. A pilot population of 200 call specialists was testing a prototype native application that provided real-time curated guidance.

Design Team:

Me (UX Research Lead), UX Design Lead, Senior UX Researcher

Task

Create a study that addresses the following goals:

Primary Goals:

  • Identify how the current-state dispute, fraud, and transaction inquiry experience is going.

  • Identify future problem statements and frame novel problems to be solved.

  • If agents could add one feature, what would it be?

Additional Goals:

  • Gather data to continue driving the creation of agent personas.

  • How much is too much information to display on the prototype?

  • What is more important: More specific information or more quantity of information?

  • Is the average agent’s effective 600 x 600 resolution, due to running multiple applications, going to impact intent preferences?

Non-Goals:

  • We don’t want this to be a prototype-only study. We want agents to feel empowered to bring up any kind of solution, not just prototype improvements.

Action

I created a multivariate study with 12 participants. The participant attributes included a mix of call volume metrics, tenure, performance, onshore/offshore agents, and mixed ability. The study used a Figma prototype and consisted of the following:

  1. Initial Interview

    • Asked casual questions, got to know specialists, and learned about their working conditions.

  2. Ran Specialists Through Call Scenarios (Wizard of Oz Study with Discrete Trials)

    • Specialists were each paired with specific call scenarios, which they answered using real recorded calls and a Figma prototype I controlled remotely.

  3. Participatory Design Opportunity

    • Specialists were asked if they could add one feature to their work experience.

  4. A/B Test (Questions, Scales, and Embedded Discrete Trials)

    • Two versions of the prototype were introduced; we asked which one agents preferred, as well as what they thought different features did or meant.

  5. Final Interview

    • Final questions and thoughts, plus a formal opportunity for the team to get involved, ask questions, and follow up with specialists.

Results

Tactical Examples:

  • 9/12 agents didn’t know what “Call Intent” meant, and 5/12 suggested it be renamed “Call Reason.” As a result, Call Intent was renamed Call Reason.

  • The native app experienced wrapping issues when condensed. The problem was identified and resolved during the study in collaboration with the Design Lead.

Strategic Examples:

  • 11/12 participants didn’t know to look at the prototype for guidance unless they specifically decided to search for it. During the participatory design phase it was identified, and then recommended, that guidance should appear directly in the core workflow rather than in a stand-alone native application.

  • Training models takes time and can’t be done in natural work contexts. Requirements like AHT (average handle time) targets have to be removed from pilot populations.

Reflection

 

What Did I Learn?

  • It’s really challenging to run studies at an enterprise bank where tools are limited. I ended up needing a lot of workarounds because apps didn’t work the way I expected. For example, due to the encrypted nature of our pilot population’s computers, Zoom didn’t work. I had to run the study via Skype screen share with Zoom connected through participants’ phones.

  • Creating cadenced participation emails for studies works great. I tried out a template during this study to give cross-functional stakeholders advance notice of when the study was going live and opportunities to participate. It landed well, and there was lots of participation.

     

What Could I Have Done Differently?

  • Recruiting ended up being harder than I expected due to service requirements I was unaware of. The agents were tied to a contract that required a certain level of employee coverage, which made reserving timeslots very difficult. In hindsight, I wish I had known this, because we thought we had full control over the bandwidth of our pilot population. If I had known in advance, I would have advocated for more reliable participant access and scheduled a longer recruitment period.