UX Research for Growth Project

A series of research projects and internal workshops in the process of deploying persona development for KADA.

Role: UX Researcher/Designer

Before I joined the KADA project, the team had never deployed user persona research, even though the project had been running for almost a year. The team occasionally received marketing analysis from the Product Development Department, but only when it attended the department’s presentations. I noticed that the KADA team had not yet realized the power and benefit of persona and user research. In addition, these presentations had not been treated as an opportunity to enhance communication between team units and to “break” the silo, for several reasons:

  • While the market reports and analysis presented ethnographic information and users’ choices and decisions, they did not show the reasoning behind the data (the “Why”). For the product team in particular, this information alone was not enough to efficiently define “What” our products and services could be.

  • The whole presentation did not separate the “market” from the “user.” There was a huge risk of paying too much attention to the market and “elastic users” without highlighting our core customer base.

  • I appreciated the essential work that the Product Development Department pulled off. Their conventional top-down presentation, however, was out of date and not ideal for business growth. When a presentation was over, each team unit would go back to its old way of working and thinking, with no chance to look at the problem from a shared perspective.


I talked to my leader and suggested that we start a UX research for growth project to clearly define our (design) persona. The project would also deploy workshops to keep the whole team on the same page while generating unpredictable ideas that team members could not possibly have come up with on their own — while remaining data-driven.

There were four stages in this research (I mainly focused on Stages 1, 3, and 4):

Stage 1:
Analyze user data from our database in order to further define different user types and to extract key field information, such as userID, key behaviors, etc.

Stage 2:
Quantitative research: questionnaire and user data.

Stage 3:
User interviews: remote and on-site.

Stage 4:
Workshop series.

Brain Dump

At the early stage of this project, the goal of the Brain Dump was to get information out of the stakeholders’ heads and onto the table. We hoped they would share any questions, assumptions, expectations, or hypotheses with us. This process was simple, fast, and dirty; sticky notes and a whiteboard were all we needed. After the meeting, we collected all the output and categorized it. Finally, we emailed the final notes to stakeholders to keep everybody on the same page.

Figure 1: Part of the Brain Dump collection

Figure 2: The final notes we emailed stakeholders that did not necessarily show much detail about what the interview process would be like, but gave everyone a hint about which direction we were going to take


The goal of this user study was to better understand our user types and their characteristics. We tried to address the following research questions:

  1. What is our users’ demographic profile?

  2. What are our users’ habits, motivations, and attitudes on our platform?

  3. What does their general code learning routine look like?

Distribution channels

We set up several call-to-action entry points on our site and social media to invite users to participate. These included banners on the sites, system messages, our official students’ and educators’ WeChat and QQ groups, and SMS.


Eventually, we collected 452 valid samples, including 317 students, 71 public school teachers, 35 private school teachers, 16 parents, and 13 others.
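As a quick sanity check on the sample composition, the segment counts can be tallied and expressed as shares of the total. This is an illustrative sketch; the counts are copied from the survey results above:

```python
# Survey sample composition (counts taken from the questionnaire results above)
samples = {
    "students": 317,
    "public school teachers": 71,
    "private school teachers": 35,
    "parents": 16,
    "others": 13,
}

total = sum(samples.values())
print(f"total valid samples: {total}")  # 452

# Print each segment's share of the valid sample
for segment, count in samples.items():
    print(f"{segment:>24}: {count:3d} ({count / total:.1%})")
```

Students dominate the sample at roughly 70%, which is worth remembering when reading the aggregate numbers later.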

User Interview

Based on the analysis of the quantitative session, we reached out to several participating users who we believed would be good fits for user interviews. Eventually, we recruited 10 student participants (3rd to 7th grade) and 3 educator participants. They reported using our platform anywhere from 2-3 times a week to every day. They also used a variety of features (Scratch/Python online IDE, social platform, and forum) on KADA in various combinations.

A discovery: From the input we collected during the interview process, several interviewees mentioned that they did not use KADA (or their PC/laptop/iPad) every weekday. Meanwhile, many of them also looked forward to seeing more content available. But why add more content when users do not have time to enjoy it?

After peeling back the layers, I found that many users thought spending 40-60 minutes—or even longer—on KADA outside of schoolwork was unrealistic. I assumed, based on this information, that the problem might not have been the content, but the way we delivered the content. I also went back to check their online content consumption details and found many of them were actively using 4399, TikTok, bilibili, and Volcano Short Video.

Following these discoveries, I thought we could try to create content that required less time to enjoy and learn. What if we had content made for 1-10 minutes of reading, listening, or watching? This way: 

  1. We could keep our users engaged by making use of their fragmented time

  2. We could continue delivering engaging KADA creating and learning experiences to users

  3. It could help differentiate our brand and experience from competitors’

Document and Analyze Interview Results

Based on the video and audio recordings and the notes we collected during each interview, we started sorting useful “moments” out of this mass of information. We then created a “Journey Map” for each user, which helped us better understand how he/she interacted with our products and services. We also identified the emotional highs and lows in each user’s journey for future value-creating work.


Figure 4: Sorting and categorizing data to find patterns and the high and low moments in each user’s journey.


Then, we grouped the findings into several themes to identify key dimensions. We decided to use “studying focus” as the X axis and “open exploring” as the Y axis. Next, we created a matrix based on the two dimensions. We positioned each interviewee into one of the quadrants based on his/her performance in the Journey Mapping. Subsequently, we created 5 personas based on this matrix.

Figure 3: The two-dimensional matrix and the 5 personas derived from it.
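The quadrant placement described above can be sketched as a small helper. The scoring scale and midpoint threshold here are hypothetical, since the write-up does not specify how “studying focus” and “open exploring” were scored:

```python
def assign_quadrant(studying_focus: float, open_exploring: float,
                    midpoint: float = 0.5) -> str:
    """Place an interviewee into one of four quadrants.

    X axis: "studying focus", Y axis: "open exploring".
    Scores are assumed to be normalized to [0, 1]; the midpoint
    threshold is an illustrative choice, not the team's actual rubric.
    """
    high_focus = studying_focus >= midpoint
    high_exploring = open_exploring >= midpoint
    if high_focus and high_exploring:
        return "high focus / high exploring"
    if high_focus:
        return "high focus / low exploring"
    if high_exploring:
        return "low focus / high exploring"
    return "low focus / low exploring"

# Example: an interviewee scoring high on focus but low on exploring
print(assign_quadrant(0.8, 0.3))  # high focus / low exploring
```

In practice the placement was done by judgment from the journey maps rather than numeric scores; the sketch just makes the two-axis logic explicit.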


Mind Mapping

Instead of holding another conventional presentation session, I suggested we run a mind mapping workshop in order to not only build a mutual understanding about our personas with stakeholders, but also to look for patterns and insights from the data with fresh perspectives.

The goal of the mind mapping was to engage every stakeholder and to create consensus about the desired qualities of the design we were going to create. Here is the outline of the workshop:

1st, Get Ready:

We printed the 5 personas as posters that captured each persona’s key themes, trends, user journey, and demographic information.

2nd, Invite People:

We invited around 15 stakeholders to join our workshop. They were assigned to 5 groups. Everyone had a marker and two different colors of Post-it Notes. 

3rd, Get to Work:

We asked attendees to look through each poster individually, without talking to each other, and to write down the thoughts sparked by studying the data on the posters. Attendees used a different color of Post-it Note to write down what they thought was missing from the data.

Next, everyone took their Post-its back to their groups to work together to find a significant theme and to discuss and discover as many insights as possible from this process. 

Eventually, each group turned its insights into design criteria defining specific design qualities (listed below), which would later be compared across teams to look for “consensus” on the ideal qualities of the future design:

  1. Creating a sense of belonging.

  2. Enhancing community-wide recognition.

  3. Creating a sense of peer-to-peer community.

  4. Creating a community based on fruitful learning resources.


Figure 5: Stakeholders were invited to the workshop (snacks were provided :)) to learn about the personas, share their thoughts, and complete several tasks.


Assumption Testing

Based on the design criteria, each team presented several concepts and assumptions that might bring these design qualities to our users.

Figure 6: Main concepts and ideas categorized (after prioritizing) across teams.

Figure 7: Auxiliary concepts and ideas generated.

We asked ourselves several questions to test each and every concept and assumption:

Q1. Will our users be interested in it?
Q2. Will we be able to create and deliver it?
Q3. What is the scale of the project?
Q4. Is it unique from what our competitors are doing?
Q5: What kind of data do we need to test it?

Eventually, we decided to pursue the idea of a Trivia Game Event. The answers below address the questions above and led us to conclude that a Trivia Game Event would be the best choice.

A1: It is fun (competing with peers on the leaderboard) and rewarding (gaining knowledge while winning rewards), yet quick to enjoy (around 2 minutes per session).
A2: Absolutely, no problem.
A3: Keep it relatively small, based on web/H5 for now (not available in our app yet).
A4: It is unique, but can be copied by competitors.
A5: During this event, we expect the Weekly Retention Rate of the KADA Community (including the forum) to be no less than 25%, with more than 5,000 attendees in total.


The product designer and I created a prototype using EduOS (an internal online web tool made to support the Teaching and Learning Research Team by, for example, generating test papers). We then invited several of our colleagues’ children to test the prototype. Based on their feedback, we made some adjustments, including to the length of each session and the types of questions asked, which now also included knowledge outside of programming for kids.

Figure 8: The user flow of the Trivia Game Event.


Figure 9-14: Six frames of the mockup of the Trivia Game.


After one month of online testing, monitoring, and running the game, the outcome was slightly better than we expected. During this event, the weekly retention rate of KADA Community was 26.43% and there were 5,762 attendees in total. In addition, 83.7% of them played the game more than once.
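Checking the observed outcomes against the targets from A5 is simple arithmetic. The numbers below are copied from the text; the comparison itself is just an illustrative sketch:

```python
# Targets set before launch (from assumption testing, A5)
target_retention = 25.0    # weekly retention rate of the KADA Community, in %
target_attendees = 5000

# Observed outcomes after one month of running the event
observed_retention = 26.43  # %
observed_attendees = 5762
repeat_players_pct = 83.7   # share of attendees who played more than once, in %

met_retention = observed_retention >= target_retention
met_attendees = observed_attendees >= target_attendees
print(f"retention target met: {met_retention}")    # True
print(f"attendance target met: {met_attendees}")   # True
```

Both targets were cleared, with attendance about 15% above the goal, which is what "slightly better than we expected" refers to.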

In order to release the MVP as soon as possible, our team had to compromise on some features and details to minimize the investment. As we expected, we received some suggestions and a few bug reports from users and stakeholders after launching.
The biggest complaint was that the interface did not fit small-screen devices very well.
In later iterations, we also improved UI design, added animated UI elements, enhanced UX copy design and CTA design, and more.