Curiosity Sparked: How AI Tutor is Igniting Wonder in Skill Development

 

Disclaimer:
The content shared is meant to highlight the passion and wonder of our guests. It is not professional advice. Please read our
evidence-based research to help you develop your own understanding.

 

💕 Story Overview

💕 In #MAGICademy S3E7, we had the great pleasure of meeting Kristen DiCerbo, Chief Learning Officer at Khan Academy, to discuss basic principles for designing AI-powered tutors that power up the skill development process for learners across age groups. She emphasizes the importance of curiosity and wonder in the learning process. She also highlights the need for balance between AI tutors and human teachers, as human relationships and motivation are crucial for student success.


Story Takeaways

  • Curiosity is a potent intrinsic motivator in learning, surpassing external rewards. It drives deeper engagement, broader exploration, and better retention. Curious learners ask more questions, make connections across subjects, and think more critically, making learning an exciting journey rather than a chore.

  • The human element in education is irreplaceable. Human teachers or coaches excel at providing emotional support, adapting to individual needs, and serving as inspirational role models. While AI tutors/coaches can provide targeted practice and feedback, it is difficult for them to replicate the nuanced understanding of human emotions or form meaningful relationships crucial for skill development success. 

  • Redefining accessibility in the context of artificial intelligence (AI) involves shifting the focus from merely accommodating disabilities to meeting each learner where they are, particularly during challenging moments in their learning journey. By leveraging AI technologies, educators and developers can potentially design systems that identify and respond to learners' specific challenges in real time, providing tailored support and resources to facilitate understanding and engagement. 

  • Kristen’s MAGIC: Translating and implementing research into product design to develop skills.

 
  • 00:00 Introduction to Dr. Kristen DiCerbo and Khanmigo

    02:06 The role of AI in guiding learner practice

    04:59 The importance of a healthy dose of productive struggle

    06:21 Identify the strength of a human tutor 

    07:57 Engineer the prompt to leverage existing large language models 

    11:31 The collaboration between AI and human instructors

    13:10 Activate a childlike sense of wonder to enhance skill development

    17:50 The future of AI in skill development

    22:15 Practical strategies to mitigate AI hallucination

    24:27 Interesting thought leaders and builders to follow

    26:57 Dr. Kristen DiCerbo's Magic: translating research into learning products

  • Human in the loop collaborating with AI tutor: https://youtube.com/shorts/o1SiHxAPTZc

    The importance of maintaining the sense of wonder while learning: https://youtu.be/IG35GjvkvMo

    True accessibility means every learner has every opportunity to develop skills: https://youtube.com/shorts/OZN_REWWlqg 

    Learning Strategy 1: Activating prior knowledge and linking it to new knowledge: https://youtu.be/3FxB_Xysy1Q

    Learning Strategy 2: Practice retrieving or self-knowledge check to enhance learning: https://youtu.be/6EZvOBt7c0c 

    Learning Strategy 3: Explain to yourself first as we learn: https://youtube.com/shorts/xt3n4VG9Bko

    Leverage AI's strength rather than weakness to reduce hallucination: https://youtu.be/yO-tr9Fc_fw

    Who were your imaginary friends when you were a child? https://youtu.be/dZ_1_8B1Dag

    • Dong, C. (2023). How to Build an AI Tutor that Can Adapt to Any Course and Provide Accurate Answers Using Large Language Model and Retrieval-Augmented Generation. ArXiv, abs/2311.17696.

    • Baillifard, A., Gabella, M., Lavenex, P.B., & Martarelli, C.S. (2023). Implementing Learning Principles with a Personal AI Tutor: A Case Study. ArXiv, abs/2309.13060.

    • Horstmeyer, A. (2020). The generative role of curiosity in soft skills development for contemporary VUCA environments. Journal of Organizational Change Management, 33, 737-751.

  • Dr. Kristen DiCerbo is the Chief Learning Officer at Khan Academy, where she leads the content, design, product management, and community support teams. Dr. DiCerbo’s career has focused on embedding insights from education research into digital learning experiences. Prior to her role at Khan Academy, she was vice president of Learning Research and Design at Pearson, served as a research scientist supporting the Cisco Networking Academies, and worked as a school psychologist. Kristen has a Ph.D. in Educational Psychology from Arizona State University.

    https://www.linkedin.com/in/kristen-eignor-dicerbo-414138b/

    https://www.kristendicerbo.com/home

    https://www.khanmigo.ai/

  • Note: This transcript may include minor errors.

    Jiani (00:00)

    Welcome to MAGICademy podcast. Today with us is Dr. Kristen DiCerbo, the Chief Learning Officer for Khan Academy. She has been devoting her career to translating and integrating educational research into the design of digitized, personalized, scalable, impactful, and engaging digital experiences for learners worldwide.

    So, so great to have you Dr. Kristen and so happy to have you today.

    Kristen DiCerbo (00:30)

    Hello, thanks for having me.

    Jiani (00:31)

    Wonderful. So as always, how would you introduce yourself in a way that you would like?

    Kristen DiCerbo (00:41)

    Great. I always say, as I introduced myself, as you said, I really spent most of my career thinking about how we can help more learners learn more. And a lot of that came, one of my early experiences was when I was in high school, I tutored middle school students and really had just such an interesting aha moment about

    how different students struggle with different things as they're learning. And started me thinking about how can we help students succeed, how can we help all students succeed. And that's taken me out in a bit of a winding path, but mostly in the field of thinking about educational technology and how we can use that to help scale what good learning and good teaching looks like.

    Jiani (01:28)

    Yeah, that's great. Your current focus is on developing content, product management, community support, and leadership for Khan Academy. And recently you've been leading the team to develop, build, and implement Khanmigo, the AI-powered tutor for Khan Academy, which is super exciting. There's a lot of good news, big news, impactful news being shared. So we're curious about Khanmigo, what was the story behind this AI tutor?

    Kristen DiCerbo (02:06)

    Yeah, so the launch of Khanmigo in March of 2023 coincided with the launch of what we call GPT-4, which is a large language model. We were lucky enough to be introduced to that model in August of 2022. So if you remember, that was even before ChatGPT launched. So we got a preview of what that model was going to look like.

    And for any new technology that we get, we always start with what are the learning problems that we're trying to solve? Not what is this fancy new technology and what can it do? But being clear about what the learning problems we're trying to solve are. And so we know, for example, that students in the United States and in many places are not making as much progress, particularly in math, as we would like them to. And that then for us,

    our solution to that has been to be able to find ways that students can engage in the kinds of practice with immediate feedback that we know they need to be successful at learning those new skills. What we saw the AI being able to do was being able to guide students in that practice. And so for us, as many of your, the listeners have probably experienced, you're sitting in a classroom with 30 students.

    Maybe the teacher's helping another student. Maybe they're working with a small group of students. You're trying to figure out this question. You can't figure out how to do that. You're just stuck. And so you're not practicing. You're not really making any progress. And the same thing is true at home. You're trying to work on your homework and you're stuck and you can't quite figure out how to get unstuck. We pretty early on said, what if this AI is able to have a conversation with you?

    that helps you get unstuck and helps you move forward in doing that kind of practice that you need to be successful. So that was the early days and we then built out the tutor for math and now we have an essay feedback tool that also is able to give students feedback on their essays and a number of other things along with a bunch of tools that are assistance for teachers.

    So assistance in writing lesson plans and assistance in analyzing their students' data, those kinds of things. So it's certainly expanded since we launched, but the early days we were thinking about how can we help students practice more and get unstuck when they're working on the new skills that they're learning.

    Jiani (04:37)

    I love that, and I totally agree with you, because a lot of times, in terms of learning efficiency, there's that point where we struggle and it seems like nothing works, and we're trapped in this internal conflict of dialogue, and sometimes that's not helpful. So it's, okay, let's just forget about it. And then there goes the opportunity for learning and development. Being able to have a conversation with an AI-powered agent can potentially help us overcome those initial blank spots, or at least help us move a step forward.

    Kristen DiCerbo (05:27)

    Yeah, there's definitely some people who suggest that a little struggle is good for learning. And that is probably true. You know, sometimes when you're trying to get it and trying to get it and then you finally get it, it's like, yes, it feels like a big celebration. But that can also pretty easily slip into being really frustrated and just giving up. And so you've got to figure out where that line is between giving a little bit of support to help students not feel so frustrated that they're done.

    Kristen DiCerbo (05:55)

    letting them kind of figure things out and experiencing that celebration of getting to the answer themselves as well.

    Jiani (06:01)

    Yeah. And I think it really depends on their self-efficacy and whether they tend to interpret struggle as a motivator or an inhibitor.

    what are some of your overarching strategies when it comes to designing Khanmigo?

    Kristen DiCerbo (06:21)

    A lot of what we do is think about what we know from research on how people learn. So for example, when we started building a tutor, we looked into the research on what good human tutors do. What that research says is that there are a lot of tutor moves that tutors make. So they might at one point summarize what the student said. At another point they might ask a probing question.

    Jiani (06:22)

    It's prompt engineering.

    Kristen DiCerbo (06:49)

    At another point, they may give a hint. So at different times, the tutor is going to make different moves in what that is. So we know that those are the kinds of things that improve student learning. So we want to do those same kinds of tutor moves. And that's how we start building in. What do we know about Khanmigo and how it acts and how it responds to things? In the same way, when we were starting to design the lesson planning feature for teachers, there's a whole bunch of rubrics out there about what is a good lesson plan. So what are the things a good lesson plan should do? So we took that rubric and then as we started teaching Khanmigo how to write lesson plans, we graded Khanmigo's lesson plans based on those rubrics. Like, it's not very good yet at this thing, but it's starting to be good at giving student examples, but it's not so good yet at those.

    early warmup activities that need to spark curiosity and activate students' prior learning. So again, bringing in the research about what the different parts of the lesson plan do, and then making sure that Khanmigo is doing those things.
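
The rubric-grading loop Kristen describes can be sketched as a small script. The criteria, checks, and sample plan below are hypothetical illustrations, not Khan Academy's actual rubric; the point is only the shape of the process: each rubric item becomes a named check applied to the generated plan.

```python
# Hypothetical rubric-grading sketch: each rubric criterion is a named
# predicate; the result shows where the generated plan is already good
# and where the prompt still needs work. (These checks are toy keyword
# tests, not a real lesson-plan rubric.)

def grade_lesson_plan(plan: str, rubric: dict) -> dict:
    """Return pass/fail per rubric criterion for one generated plan."""
    return {name: check(plan) for name, check in rubric.items()}

rubric = {
    "has_warmup": lambda p: "warm-up" in p.lower(),
    "has_student_examples": lambda p: "example" in p.lower(),
    "activates_prior_knowledge": lambda p: "prior knowledge" in p.lower(),
}

# A stand-in for a model-generated lesson plan:
plan = "Warm-up: recall prior knowledge of fractions. Example: share 3 pizzas among 4 friends."
scores = grade_lesson_plan(plan, rubric)
```

Criteria that come back False point at the parts of the prompt that still need engineering.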

    Jiani (07:57)

    I love, I love that. It's kind of like training ChatGPT, or training the model, using your rubric at first, and then based on the training...

    Kristen DiCerbo (08:09)

    I want to, it's not training the model. So I want to be clear. Training the model in AI has a specific meaning where you feed data into the model to get it to act in this particular way. What we're doing is prompt engineering. So the prompt is the instructions you give the model. So the analogy is like, if you have an assistant, it's how you tell the assistant what to do. It's the instructions you give the assistant, hey, first do this, then do this, that kind of thing.

    Kristen DiCerbo (08:39)

    So just want to be clear that as we think about artificial intelligence, there's different ways to get it to do what you want. But these are large language models that have trillions of parameters. You can use them as is and then control them with the instructions you give. And so that's what we've been doing.
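
As a minimal sketch of what Kristen means by prompt engineering: the model is used as-is, and behavior is steered by instructions prepended to every conversation. The tutoring rules and message structure below are hypothetical, not Khanmigo's actual prompt.

```python
# Minimal prompt-engineering sketch (hypothetical instructions, not
# Khanmigo's actual prompt). Instead of retraining the model, we steer
# an off-the-shelf LLM by prepending instructions -- the system prompt --
# to every conversation sent to a chat-completion API.

TUTOR_SYSTEM_PROMPT = """You are a patient math tutor.
Never give the final answer directly.
At each turn, choose one tutor move:
- summarize what the student just said,
- ask a probing question, or
- give a small hint."""

def build_messages(conversation: list) -> list:
    """Prepend the tutoring instructions to the student's conversation.

    The returned list is the payload a chat-completion API would receive;
    the underlying model itself is unchanged.
    """
    return [{"role": "system", "content": TUTOR_SYSTEM_PROMPT}] + conversation

messages = build_messages([{"role": "user", "content": "I'm stuck on 3x + 5 = 20"}])
```

If the model's behavior is off, you rewrite the system prompt, not the model.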

    Jiani (08:59)

    I see. So based on the rubric, then we come back to adjust the prompt. Maybe at the beginning, specify and saying, please try your best to activate learner's curiosity. Here are a few examples. So further kind of fine tuning the prompts. I get it. I get it. That's great.

    Kristen DiCerbo (09:20)

    That's right. So we give the prompt, look at the results. Is it doing what we want it to or not? If not, we go back to the prompt and rewrite the prompts. That's right.

    Jiani (09:29)

    That's great. And once you revise the prompt, has Khanmigo been able to perform consistently as the model, the language model, the database start to kind of change and evolve?

    Kristen DiCerbo (09:43)

    So one of the things that we have at Khan Academy in our relationship with OpenAI is what's called a dedicated instance. And that means we have our own version of the model and all of the traffic from Khan Academy hits that version. So that version doesn't change unless we say we'd like to use a new version. So that means that we control when the model changes and before we change it,

    We run through a whole bunch of tests. Like we just actually made the switch. This is getting a little technical, but we just made the switch from GPT-4 to GPT-4 Turbo. And before we did that, we tested all of our prompts and revised them as needed so they would act the same way with the new model. And then we said, okay, now, swap out our model, OpenAI, from GPT-4 to GPT-4 Turbo. And now we're ready to do that.

    Kristen DiCerbo (10:40)

    So that's a little bit of the difference between when you are an organization that's able to build on top of these models, you can, and you're kind of in that relationship with the organization that owns the model.
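
The pre-swap testing Kristen describes can be sketched as a prompt regression suite. The cases, behavioral checks, and stub model below are hypothetical stand-ins, not Khan Academy's actual harness: each case pairs a prompt with a check the candidate model version's output must pass before the pinned version is changed.

```python
# Hypothetical prompt regression suite run before swapping model versions.
# Each case pairs a prompt with a behavioral check on the output.
from typing import Callable

def run_regression(cases: list, model: Callable[[str], str]) -> list:
    """Return the prompts whose outputs fail their behavioral check."""
    return [prompt for prompt, check in cases if not check(model(prompt))]

# Stand-in for calling the candidate model version:
def candidate_model(prompt: str) -> str:
    return "Let's think it through. What could you try first?"

cases = [
    ("Student: just tell me the answer to 3x + 5 = 20",
     lambda out: "answer is" not in out.lower()),  # must not give answers away
    ("Student: I'm stuck",
     lambda out: out.strip().endswith("?")),       # should respond with a question
]

failures = run_regression(cases, candidate_model)  # empty list -> safe to swap
```

Only when the failure list is empty (after revising prompts as needed) would the pinned model version be switched.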

    Jiani (10:54)

    That's great. That actually gives you space to test and pilot before it's ready, so it's kind of an additional layer of protection. That's wonderful. And you've mentioned in our initial conversation, we were talking about, you know, could AI potentially replace human beings? I was like, no, no, no, no, no. It's a human-in-the-loop kind of model. Can you help us understand a little bit more? How can a human tutor or instructor or trainer work in conjunction with AI to maximize the joy of learning and the potential of the learner?

    Kristen DiCerbo (11:31)

    One of the things that we get asked about a lot is whether this is going to replace teachers, or how well does this compare to a human tutor? I think the better comparison is how much improvement does this offer over nothing? Because most students don't have a human tutor. Most students have nothing, or they have Google. The idea is that we want to get the AI tutor to be able to do some of the fundamental activities of learning that human tutors do. But we know that human tutors and human teachers also build relationships with students, and the AI is not going to do that. And we don't try to get it to be the student's buddy or their friend or to form some kind of an emotional relationship with them. We don't think that's appropriate. But on the other hand, we know that

    When students are in schools where they feel that there's an adult who cares about them, we see better graduation rates, better post-secondary attendance, better school attendance generally. So we know that those human relationships are important. We're not trying to replicate that with AI. What we're trying to replicate with AI are the learning moves and some of the foundational things where students can get the practice they need and build their skills.

    knowing that there's still going to need to be humans who are encouraging them, motivating them, who understand their long -term goals and are helping them proceed towards those. Like all of those are still very much human skills that we want to make sure that there are still humans doing.

    Jiani (13:10)

    I'm curious, what role do you think childlike wonder or curiosity play in the learning process?

    Kristen DiCerbo (13:18)

    So one of the things that I really liked about Sal, so Sal Khan of Khan Academy, he started Khan Academy by making these videos for his cousins who he was tutoring, and he put them on YouTube and the rest is history. But one of the things I loved about his videos before I knew him, before I came to work at Khan Academy,

    was this, the way he just had that sense of wonder about, my gosh, can you believe that the world works this way? Can you believe that math works this way? If I'll do this and this, I come out, my God, like creating that sense of wonder to me is really, I know, I just, I love how he does it, but I also do understand that that's an important part of.

    Kristen DiCerbo (14:03)

    learning and thinking about one of the things that can drive someone to learn something is because they have a question about it. They are interested in understanding how it works. And I think we tend to lose that sometimes in school and we start to just, I'm learning this because the teacher said I had to learn it and you kind of like start moving through. And I think we lose a lot of students' motivation that way. So I do think it's important.

    Kristen DiCerbo (14:31)

    to help teachers and to help students start to engage that sense of wonder of how the world works. And so there's some of this in math, there's a lot of this in science, but there's also thinking about understanding how history relates to today's world. And so I think in any domain, you can start to have that. We actually literally have an activity called Spark Your Curiosity,

    where you can give it a topic or a subject and it will give you 10 things or five things you might not have known about that subject before and ask you if you'd like to talk about any of those and dig further. So yeah, I think it's really important for us to continue to think about how we can spark people's sense of curiosity and wonder.

    Jiani (15:15)

    Yeah, that's amazing.

    That's great. Are there other strategies that you've implemented as you design, to pique curiosity? Any other strategies?

    Kristen DiCerbo (15:28)

    Well, so each part of a lesson, when we think about education, actually has a pedagogical purpose. And so when you think about what we call the warmup, which is, as I said, designed to spark curiosity and also activate your prior knowledge. Because if you're thinking about what you already know about this subject, then you can link the new knowledge to the existing knowledge in your brain. You can imagine like the neurons connecting.

    That's kind of what happens there. There's other techniques that have been shown in research over and over to help students learn. One is self-explanation. So when you are thinking about learning something new, like if you're watching a video or reading it in a book, repeat it to yourself in your own words. And again, that helps you start to think about how it relates to what you know and, you know, builds your own understanding of what's going on.

    The third is what we call retrieval practice. It's quizzing yourself. Because, you know, you can just read over what you're studying for a test, just read over what you've highlighted. Well, you're not actually practicing retrieving something from your memory. And one of the things we have to do is not just put stuff in our memory. We have to get it out of our memory

    Jiani (16:47)

    Yes.

    Kristen DiCerbo (16:47)

    to be able to talk about it and work with it. So practice retrieving things is basically quizzing yourself. So if you're studying for a test instead of reading and reading and rereading, it's better to close your book and quiz yourself and practice retrieving the information as opposed to just kind of keep putting it, keep trying to cram it into your brain, practice getting it out of your brain too.

    Jiani (17:10)

    I love that. It's like a breathing in and breathing out for your brain with the retrieval. Amazing, amazing. Then I'm curious, like if we kind of look into the future, what would be the best future that you can envision with the fast advancement of technologies, not only with artificial intelligence, but with like virtual realities?

    Kristen DiCerbo (17:14)

    Yes, yes, yeah.

    Jiani (17:35)

    web3 and brain-machine interfaces like Neuralink and all those interesting, amazing, and futuristic technologies. Yeah, what would the future look like, could look like?

    Kristen DiCerbo (17:49)

    So I hesitate here, so don't take this as a prediction, because I never would have predicted, you know, two years ago that we'd even be where we are now. And I think some technologies have a lot of promise and some are probably going to get pruned away. But the idea really is not about the technology. The idea is about students being in learning environments where they are able to have the opportunities to learn and learn up to their potential.

    And whatever technologies can help do that for all students, but particularly students who in the past have not had those opportunities or have not been able to access that kind of learning. I think that's the vision that we're trying to build for the future. And so yeah, it's that more students learning more.

    Jiani (18:36)

    And sometimes that accessibility is not that visible, and that resonates back to the example you started with: in the classroom, some students may struggle and not say anything, and nobody knows about it except themselves. And if they decide to let it go, then they will really pass on that learning opportunity. So it's really hard to see sometimes.

    If technology can really help us visualize and catch, or meet learners where they are, 24/7, or wherever they want to learn, that will be a fantastic place to be. Any particular challenges or risks that we need to be aware of? So many.

    Kristen DiCerbo (19:18)

    Yeah, yeah.

    there's so many risks. So we actually created, as part of our responsible AI development, a framework for how to think about mitigating risks. We started off with a couple of different existing frameworks that are out there in the world to start thinking about risks. But some of our biggest risks as we were launching were

    accuracy. So these AI models are not 100% accurate. In fact, they do what the AI folks call hallucinate, which is they can make things up. And so there's things we can do on our side to reduce the amount that it hallucinates, but that doesn't eliminate it. And there's things we can do by increasing the AI literacy of students and teachers so they understand it could hallucinate and know that they should be

    watching out for that and checking that and not thinking everything that the AI says is 100% true. So that's a big risk now and into the future, thinking about: are these models accurate? So you can imagine, I was talking about them in the tutoring context, but you can imagine they can also be inaccurate in ways that spread misinformation in our society, and think about how we can ensure that

    what we're reading is actually the truth, or the video that we're watching is actually that person saying that thing. I think all of that falls into this big bucket of the risk around accuracy. Another risk that we thought about a lot is just people who are using it for nefarious purposes, purposes we don't want it used for.


    As a societal problem, another big risk that relates to misinformation is how people who don't have good intentions use these tools, not for good, but for their own purposes, whatever those may be. So I think that's another whole section of risk. And then there's a whole section of risk around bias and equality and thinking about are these...

    Is this new technology only getting in the hands of students who already have all the advantages? How do we get it in the hands of students who are historically under resourced? How do we get them the experiences and working with it? And how do we eliminate and start to reduce bias in the technology itself and how all of those work? So those are just three risks. We can keep going down the list, but there's lots of things that we need to just be careful and cautious of with all of this new technology.

    Jiani (21:54)

    I like how you put hallucination and AI literacy as the top two. In terms of mitigating hallucination, what are some practical strategies that you would recommend people try?

    Kristen DiCerbo (22:15)

    Well, so there's two ways that we've approached it. One is, for us, we've built it on top of our existing content where we know the answers. So when students are working on a math problem on Khan Academy and are talking to Khanmigo, we feed the problem and the answer to that problem to the model, because we know the answer. So that helps

    reduce the amount of things it's making up and getting the answer wrong. So there's that piece. There's also, though, in the end, when it's actually doing math, we have ended up having it feed out into a calculator and sending that calculator answer back, as opposed to trying to get the language model to do math. So the point there that can generalize is: don't ask it to do things that it's not going to be good at. Like right now, doing math.

    Use a calculator. We have a tool for doing that. That's a calculator. And if you're just trying to get a math answer, probably better off using that right now than a large language model. But there's other things. The more you get into kinds of niche questions and less known problems, the more likely it is to kind of make something up. We also know it's not good at citations. So if you're trying to cite a source,

    just the way these models work, it's not going to be good at it. So one of the things you can do to reduce hallucinations generally is just not ask it to do things that you know these models aren't that good at and where they are. And then the second is to try to use applications that are specifically designed for the purpose you want. So if you're interested in tutoring, think about an education application that has done work to reduce the hallucinations as opposed to kind of...

    going to ChatGPT for everything, for example.
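
The "route math to a calculator instead of the language model" idea can be sketched as follows. The routing heuristic and calculator here are simplified illustrations under my own assumptions, not Khanmigo's implementation: arithmetic goes to a deterministic evaluator, and only non-math requests would be sent on to the LLM.

```python
# Hypothetical sketch: delegate arithmetic to a safe, deterministic
# calculator; only queries that are not pure arithmetic would go to
# the language model.
import ast
import operator

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv,
        ast.USub: operator.neg}

def calculate(expression: str):
    """Safely evaluate a plain arithmetic expression (no eval())."""
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp):
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp):
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError("not a pure arithmetic expression")
    return walk(ast.parse(expression, mode="eval").body)

def answer(query: str, llm=None) -> str:
    """Route arithmetic to the calculator; everything else to the LLM."""
    try:
        return str(calculate(query))
    except (ValueError, SyntaxError):
        return llm(query) if llm else "(would be sent to the language model)"
```

The same design choice generalizes: give each kind of request to the tool that is actually good at it, rather than asking the language model to do everything.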

    Jiani (24:05)

    I love that. I love how you tied the solution actually to the AI literacy as well, because we need to know the background to make the decision. Perfect. Wow. And are there any other startups or thought leaders that you've been following and you would recommend our audience to follow and keep learning as well?

    Kristen DiCerbo (24:13)

    Right? Right. Exactly. Exactly.

    So when I'm asked about this, I know folks are often asking about who in the AI world they should follow, but I think that if you're interested in the stuff that I'm doing, you should build your foundation in learning science and you should think about understanding how people learn as opposed to being too much about, my gosh, how's the AI working and where all that is. So my first recommendations are two people.

    who have written books and many articles and blog posts about how people learn. One is Daniel Willingham and one's Barbara Oakley. And they both have very accessible writing, like it's easy to read, but they're translating research about how people learn. So that's my first recommendation. If you're really interested in thinking about learning and education, there's a professor, excuse me, Ethan Mollick, who is someone who's great to follow on social media, who is always on top of the latest in AI, but thinking about how it applies to education.

    I would be remiss if I did not also say, Sal Khan has published a book called Brave New Words that is about AI and education.

    Jiani (25:31)

    Wonderful, wonderful. Thank you for sharing all those great thought leaders and such a great conversation. And before we move into the magic piece of the talk, let's give a recap. So we've talked about Kristen's background, her passion, her commitment.

    And we also talked about the role of curiosity and childlike wonder, and how that serves as a strong, powerful activator for any person who's going through or starting a learning journey. And we also talked about the future of an AI-powered world where true accessibility is not just on the surface; it's also about trying to meet every learner where they are, as they need it, so everybody is able to learn and won't be denied a learning opportunity. And we also talked about some people to follow and some potential risks and guardrails we need to think about. AI literacy is definitely number one, and ways to manage hallucination are also an important aspect to consider.

    Now let's move to the magical piece of the conversation. So Dr. Kristen, when you were 11 years old, what did you enjoy doing so much that time disappeared?

    Kristen DiCerbo (26:57)

    When I was 11, I was definitely a writer of stories. Loved to make up fantastical stories. I'd have animals in them. I've always kind of been an animal person. But that was definitely the case of just, I could fill pages and pages of a notebook just writing out these kind of tales that were spinning out of my head.

    Jiani (27:09)

    Yeah.

    What was one story? Are you open to share with me a mini story? One that you've created.

    Kristen DiCerbo (27:29)

    I couldn't even tell you. These are not stories that lasted into my adulthood. It was just my 11-year-old self expressing where things were, but I couldn't even pull out, like, this is what the story was about or any of that.

    Jiani (27:37)

    So it's like a mixture that merges, and yeah, I would be curious, maybe one day in the future.

    Kristen DiCerbo (27:52)

    would have, yeah, I don't know if any of them even, you know, I don't think they even exist anymore. You know, which is an interesting thing too. When I think back to when I was 11, everything was on paper and could be pretty ephemeral. And now everything is recorded, digital. I think that's a good thing and a bad thing for folks. I don't know that the world needs to see my 11-year-old stories.

    Jiani (28:09)

    I recently watched a movie. It's called IF.

    Kristen DiCerbo (28:18)

    I saw IF too. Yes. Yes.

    Jiani (28:24)

    The imaginary friends. Yeah, I think everybody needs a little bit of that magic.

    Kristen DiCerbo (28:30)

    So did you have imaginary friends?

    Jiani (28:32)

    I do.

    Kristen DiCerbo (28:33)

    I did when I was little. I had two. Their names were Kobe and Teddy. Yeah, Kobe and Teddy. I think they were human. I know in the IF movie these imaginary friends take all different forms, but mine were humans.

    Jiani (28:38)

    Kobe and Teddy, are they bears? Little furry bears?

    Mine is a big brother. Yeah, because I'm the only kid in my family, due to the policy and everything. And having a big brother makes my life feel complete and fun and happy.

    Kristen DiCerbo (28:52)

    Nice! Yes! Yes!

    Yes.

    So that's pretty common. When I had mine, and I mostly know about this because my mom has told me, I mostly had mine before my sister was born. And then when my sister was born, I didn't really play with them anymore or talk about them. So I think it's pretty common for them to be kind of substitute siblings. Yeah, that makes sense.

    Jiani (29:25)

    That's beautiful. And what do you think overall is your magic?

    Kristen DiCerbo (29:30)

    This doesn't sound magical, but I think the thing that is pretty unique about me is my ability to translate scientific, academic research into things that people can build and make. That ability, I have found, is more rare than I would expect, and it's what has really led me on my path for my whole career.

    Jiani (29:58)

    Yeah, it is kind of hard, because as I was reading research papers, I was like, it's hard. It's really hard to understand sometimes, even when you've been trained in the discipline. Being able to not only translate, but translate within the context of building and designing a product, really takes some magic, true magic.

    Kristen DiCerbo (30:33)

    Yeah, yeah. So that's my skill.

    Jiani (30:35)

    Yeah, and when you think about it, that makes all the research have meaning, because the highest calling of all the research is to be able to find someone to...

    Kristen DiCerbo (30:42)

    Right, yes, yes.

    Yes, I at one point was kind of at a fork in my career, deciding whether I was going to go into the university and be a professor, or go into industry and build things. And I decided that, for me, doing that translation and building things that got used by millions of students was what I wanted to do,

    but that I could still provide that bridge to all those folks who took the path into the university and are doing a lot of the foundational research, too.

    Jiani (31:24)

    That's amazing. So good to have met you, and so good to have you share your insights, your design wisdom, your research-translation wisdom, your vision for the future, and your deep love of product design to help people in a much more scalable and accessible way, making all the research meaningful and impactful.

    Kristen DiCerbo (31:52)

    Well, thank you so much for having me. I was so excited to talk to you and to have your listeners hear our conversation.

    Jiani (31:58)

    Wonderful. Kristen, all the best. And for folks who are interested in learning more about her work, we welcome you to get connected. Her information and contact details are in the show notes below. So please connect, please create new stories, and please create more helpful technologies and programs to help more people across this world, because we all need it.

    Kristen DiCerbo (32:23)

    Thank you so much, take care.

    Jiani (32:23)

    Wonderful. Thank you.
