We all need to tell stories in job interviews. The question might be a casual “How did you do X?”, or a more formal invitation such as “Tell me about a time you led a product innovation!” But for many people, the challenge is that there are too many ways to tell a story, and they aren’t sure which way works best in an interview (Should I tell it chronologically? Which point of view should I use? Which details should I focus on, and which should I leave out?).
In this article I would like to share a framework that I developed specifically for storytelling in an interview setting, one that has personally helped me have some very enjoyable interview conversations. I call it the SCARL framework, which stands for Situation, Complication, Action, Results, Learnings.
Interview storytelling is about your skills and character
Storytelling in interviews is a purposeful activity: We tell our stories to the interviewer not for fun, but to demonstrate how well we would fit into a team.
Stories have the power to demonstrate our potential fit into a team because they showcase our skills, such as problem-solving and communication, which are hard to assess with technical/design questions.
Moreover, stories are also the interviewer’s lens into our character. Whether it’s ownership, teamwork or how we embrace challenges, in short interactions such as interviews, the best way to express who we are is through storytelling.
Now that we’ve established what we want to achieve with storytelling in interviews, we can examine why the SCARL framework helps us achieve it.
The SCARL framework is designed to demonstrate skills and character efficiently
Let’s review the 5 components of the SCARL framework one by one. As we look into each component, I will use it to build the corresponding block of an example story. When we are done with all 5, these blocks will come together to form the complete story.
The question I will answer as an example is:
Tell me a time when you made a product mistake.
1. Situation
In this first part of our SCARL framework, we tell the interviewer the context of our story. That is, we explain our business or product in plain terms, and set the context for the problem that we are about to discuss.
This step is important because it provides basic but important background information that the interviewer — who is most likely not familiar with whatever you previously worked on — doesn’t know. This helps remove cognitive burden (i.e. you are less likely to confuse/lose your audience) and ensure both parties have the same information before a deeper dive.
(For further discussion on reducing cognitive burden in written communication, see my previous article on writing.)
This Situation step is also important because it demonstrates our understanding of our own product/business, which speaks to the character traits we discussed earlier: if we can explain our product really well to someone who has never worked on it, the interviewer knows we truly have a strong understanding of it.
The Situation part of our story may look like this:
A few years ago I was the product owner of a networking matchmaking app. The concept is like Tinder but for professional networking: we show you anonymized LinkedIn profiles and you swipe right if you want to connect. When two people mutually swipe right, they become a connection and can start chatting!
2. Complication
In this second part of the SCARL framework, we complete the context we set for the audience (interviewer) by explaining the problem, challenge or disruption — anything that derailed our normal development or operation process — that we faced.
We need to do this because the Complication sets the focus of the story we are telling: no story can exist without conflict.
We also need to tell the Complication because, similar to the previous component (Situation), by explaining the problem clearly, we demonstrate that we have a good grasp on the problem, which helps the interviewer establish our problem analysis skills.
If you have previously read about the STAR framework — Situation, Task, Action, Result — you may be wondering whether I substituted Complication for Task. Yes, the SCARL framework is partly based on the STAR framework. The reason I prefer Complication to Task is that in many scenarios the Complication is clear but the Task may not be. For example: if you noticed that a product you were running suddenly lost a lot of users (a Complication), is your Task to bring the previous users back? To acquire new users? Or to kill the product because it is no longer relevant? How we deal with the ambiguity introduced by the Complication, even without a clear Task, goes a long way in showing our problem-solving skills.
(For more discussion on dealing with ambiguity, see my previous article on embracing ambiguity.)
Let’s continue our example, in which the Complication part of our networking app story may look like this:
A few months after we launched the app, we noticed that although our customer base kept growing, we kept falling short of target on one metric: connection messaging activity. That is, our customers were signing up, browsing profiles, swiping and making connections, but they were not talking to their connections!
3. Action
This third component is about what we did when facing the Complication.
This can include how we analyzed the problem, which hypotheses we decided to test, how we designed and implemented new features, whom we reached out to for help and partnership, and so on.
This is the centerpiece of our demonstration of skills, such as problem-solving and communication, and character, such as teamwork and how we handle challenges.
Example:
When I noticed this pattern, I worked with our telemetry team to generate portraits of users who did message their new connections in-app, and of those who didn’t. It turned out that most users never exchanged a single message with the majority of their new connections, but had very long conversations with a few of them in the app.
Seeing that, my hypothesis was: people felt like only a small percentage of their connections were “high-quality” connections, and they didn’t want to bother messaging the rest.
So I designed a new feature where the app would automatically remove “redundant” connections for users: if you swipe right and connect with someone, but don’t message each other within 30 days, we un-connect you, automatically purging the connections that turned out to be uninteresting! We launched the new feature in a beta release.
4. Results
In this 4th component of the story, we reveal the Results of the Action we took in the face of the Complication.
This part is important because the Results of our Action also speak volumes about our problem-solving skills. After all, the interviewer would be curious: did we successfully solve the problem?
Continuing our example:
To my surprise, users pushed back strongly against this new feature! After a lot of critical feedback emails and 1-star reviews, I knew my hypothesis was wrong. But how? This time, instead of relying on quantitative telemetry data, I talked to a few users. What I learned changed my perspective completely.
It turned out that the biggest reason users didn’t message a new connection in-app was that they immediately connected on LinkedIn and chatted there instead! Only a handful of new connections chose to chat in our app directly, because it was seen as a “one-time matchmaking event”, not “the place to go on the ‘first date’”. And when we purged what we thought were “uninteresting” connections, we didn’t know the two people were in fact chatting over LinkedIn, and by un-connecting them we unknowingly made them offend each other!
Based on these learnings, we rolled back the feature, shared what we learned with our beta test users, and announced that we would pursue other updates in the future to strengthen their connection experience. Internally, I also started a new initiative to work with different stakeholders to revisit our product mission: are we happy with being only the networking matchmaker? Or do we want to expand our product vision to become a social media platform?
5. Learnings
In the 5th and final component, we discuss our Learnings: Why did our Action lead to those Results? What are the next steps to make our product stronger? What can we do differently in the long term to improve the product? Sharing these learnings helps demonstrate our problem-analysis skills, ownership, and ability to grow.
Let’s wrap up with our example:
From this experience, there were two learnings that I took away.
The first one was the importance of qualitative, “human” data in product insights. I learned more about user psychology in just a few conversations than the quantitative telemetry data would ever have revealed!
The other learning was that, as a product’s creators, we can often have confirmation bias about how our customers view or use our product, but customers’ intuition can lead them to use it in ways we never imagined. We should not blindly assume.
Use the SCARL framework as Lego pieces, not a waterfall
I would like to wrap up this article with a note.
As you may have noticed in my example story, because our problem-solving process can continue for more than one iteration, we sometimes need to talk about more than one Action, Results, or Learning. As we take a small Action, we may discover new Complications (one possible Result), reflect on Learnings, and take further Actions, and so on and so forth.
This means, although many simple stories will fit into the most basic 1-pass Situation-Complication-Action-Results-Learning structure, it’s completely normal to have a more complicated story that doesn’t. For example, we may go: Situation-Complication-Action-Results-Learning-Action-Results-Action-Results-Learning. The SCARL framework is not meant to be a waterfall workflow. Instead, you can use any piece when it is logical to expand your story that way.
Let me drive this point home by going through the whole example story again:
Q: Tell me a time when you made a product mistake.
A:
(Situation)
A few years ago I was the product owner of a networking matchmaking app. The concept is like Tinder but for professional networking: we show you anonymized LinkedIn profiles and you swipe right if you want to connect. When two people mutually swipe right, they become a connection and can start chatting!
(Complication)
A few months after we launched the app, we noticed that although our customer base kept growing, we kept falling short of target on one metric: connection messaging activity. That is, our customers were signing up, browsing profiles, swiping and making connections, but they were not talking to their connections!
(Action)
When I noticed this pattern, I worked with our telemetry team to generate portraits of users who did message their new connections in-app, and of those who didn’t.
(Results)
It turned out that most users never exchanged a single message with the majority of their new connections, but had very long conversations with a few of them in the app.
(Action)
Seeing that, my hypothesis was: people felt like only a small percentage of their connections were “high-quality” connections, and they didn’t want to bother messaging the rest.
So I designed a new feature where the app would automatically remove “redundant” connections for users: if you swipe right and connect with someone, but don’t message each other within 30 days, we un-connect you, automatically purging your connection list of connections that turned out to be uninteresting! We launched the new feature in a beta release.
(Results)
To my surprise, users pushed back strongly against this new feature! After a lot of critical feedback emails and 1-star reviews, I knew my hypothesis was wrong. But how?
(Action)
This time, instead of relying on quantitative telemetry data, I talked to a few users.
(Results/Learning)
What I learned changed my perspective completely.
It turned out that the biggest reason users didn’t message a new connection in-app was that they immediately connected on LinkedIn and chatted there instead! Only a handful of new connections chose to chat in our app directly, because it was seen as a “one-time matchmaking event”, not “the place to go on the ‘first date’”. And when we purged what we thought were “uninteresting” connections, we didn’t know the two people were in fact chatting over LinkedIn, and by un-connecting them we unknowingly made them offend each other!
(Action)
Based on these learnings, we rolled back the feature, shared what we learned with our beta test users, and announced that we would pursue other updates in the future to strengthen their connection experience. Internally, I also started a new initiative to work with different stakeholders to revisit our product mission: are we happy with being only the networking matchmaker? Or do we want to expand our product vision to become a social media platform?
(Learning)
From this experience, there were two learnings that I took away.
The first one was the importance of qualitative, “human” data in product insights. I learned more about user psychology in just a few conversations than the quantitative telemetry data would ever have revealed!
The other learning was that, as a product’s creators, we can often have confirmation bias about how our customers view or use our product, but customers’ intuition can lead them to use it in ways we never imagined.