Reality Remixed: The VR + AI Frontier for Radical Skill Development
📑 Chapters
00:00 Introduction to Dr. Chris Dede
02:06 Importance of Immersive Technologies & AI
06:57 Exploring Virtual Environments for Learning
10:50 Case Study: EcoMOD & Student Engagement
16:16 Balancing AI Limitations with Human Presence
23:16 Role of AI Avatars in Immersive Environments
27:29 What Are the Potential Concerns of AI?
36:39 The Role of Childlike Wonder
39:28 Dr. Chris Dede's Magic
Disclaimer:
The content shared is meant to highlight the passion and wonder of our guests. It is not professional advice. Please read our evidence-based research to help you develop your unique understanding. AI technologies have been used to assist in creating content derived from genuine conversations. All generated material undergoes thorough human review to ensure accuracy, relevance, and quality.
💕 Story Overview
In S4E02 of @MAGICademy Podcast, Dr. Chris Dede discusses the transformative potential of immersive technologies and artificial intelligence in human development.
He explores how we can create engaging learning environments to enhance skill development and foster a deeper understanding of complex subjects.
Dede argues that by combining the strengths of human learners with AI capabilities, we can achieve a new level of effectiveness that was previously unattainable.
Key Takeaways
Intelligence Augmentation: Intelligence augmentation (IA), where human capabilities are enhanced through collaboration with AI, allows for a more effective learning experience by combining the strengths of both humans and machines.
Immersive Learning: Immersive technologies like virtual reality can transform traditional classrooms into engaging spaces that foster deeper understanding and skill development.
AI Avatars: AI avatars can facilitate rich interactions and assessments within immersive environments, making learning experiences more authentic and engaging.
Empathy Development: Immersive experiences can serve as an "empathy engine," allowing students to step into different roles and understand various viewpoints, ultimately fostering greater empathy and perspective-taking.
Scalable Training: Virtual environments make training scalable: learners can practice complex skills in a simulated context, making learning more accessible and affordable than traditional methods.
⭐ What’s Chris’s Magic?
Chris's magic lies in his ability to harness immersive technologies and artificial intelligence to create transformative learning experiences that empower learners to explore, understand, and engage innovatively with complex concepts.
Conclusion
In a world increasingly defined by technology, the lesson from this episode with Dr. Chris Dede is to embrace the power of intelligence augmentation and immersive environments as tools for personal and professional growth.
By valuing diverse perspectives and leveraging artificial intelligence alongside our own capabilities, we can enhance our learning experiences and develop deeper empathy for others.
Whether you're a student, educator, or professional, consider how you can integrate these technologies into your life to foster curiosity, creativity, and collaboration.
This approach not only enriches your understanding but also empowers you to become a more effective contributor to society, bridging gaps and inspiring innovation in your community.
If you would like to stay tuned for our future guests and their magical stories, we welcome you to join us.
📚 Related Research
Cao, L. Y., Wegerif, R., Hennessy, S., & Dede, C. (2023). Developing teachers' contingent responsiveness in science discussions with mixed-reality simulation: A design-based study. Proceedings of the 17th International Conference of the Learning Sciences (ICLS 2023).
Chen, J.A., Tutwiler, M.S., Metcalf, S., Kamarainen, A., Grotzer, T.A., & Dede, C. (2016). A multi-user virtual environment to support students' self-efficacy and interest in science: A latent growth model analysis. Learning and Instruction, 41, 11-22.
👤 About Dr. Chris Dede
Dr. Chris Dede is a distinguished educator and researcher at the Harvard Graduate School of Education, where he has served as the Timothy E. Wirth Professor in Learning Technologies for over 22 years. With a career spanning more than half a century in educational technology, Dede specializes in the intersection of artificial intelligence and immersive technologies, focusing on their impact on human learning and skill development. As co-principal investigator and associate director of research at a National Science Foundation-funded institute, he is dedicated to advancing the understanding of how these technologies can enhance online learning experiences. Dede's innovative work, including projects like EcoMOD, showcases his commitment to empowering learners through engaging, data-driven environments that foster critical thinking and empathy.
🎙️ Transcript
Jiani (00:02)
Welcome to the MAGICademy Podcast. Today with us is Professor Chris Dede. Finally, it's such a pleasure to have you. He has been a senior researcher at the Harvard Graduate School of Education, where he has been the Timothy E. Wirth Professor for over 22 years. Currently he is a co-principal investigator and associate director of research at a National Science Foundation-funded
institute focusing specifically on artificial intelligence and emerging technologies as they impact human and adult learning and online learning. His specialty is in immersive technologies and artificial intelligence. There are going to be plenty of stories to share, rich in evidence and insights. So it's such a great pleasure to have you, Professor Chris, with us today.
Chris (01:01)
Well, thank you so much for inviting me and I'm looking forward to sharing ideas.
Jiani (01:06)
Perfect. The first question I ask is just to make us happy, or to bring us out of our current perception. So the first question is: beep beep. In front of you lands a spaceship, and out walks a friendly little alien. Friendly. If you were to use one word, one sound, or one movement to introduce yourself,
What would that be?
Chris (01:38)
The one word would be xenophilic. Xenophilic means that you value things that are different than you are. Xenophobic is when you're afraid of things that are different than you are. And we see so much xenophobia across the world, so many ethnic conflicts, but I'm definitely xenophilic and I'm happy to see an alien.
Jiani (02:01)
Beautiful.
Why immersive technologies and artificial intelligence? Out of everything that you could study to empower human learning and skill development, why did those two areas capture your attention, passion, and dedication?
Chris (02:27)
Well, I've been in the field of ed tech for more than half a century. So of course I've seen a lot of things come and go, but AI and XR virtual environments have been two major themes for me. XR because classrooms are very uninteresting and barren places, and you can be somewhere else mentally, even though physically you're still in a classroom, when you're in a virtual environment.
AI because AI offers the opportunity to complement our strengths with the strengths of computers. And what's exciting to me, and I tell my students this, is after many decades of studying both of these, now they're affordable, now they're practical, and so things are opening up that haven't been true in the half century that I've worked in the area.
Jiani (03:26)
Beautiful. I appreciate how you focus on enhancement, because there have been a lot of conversations about how artificial intelligence will replace the human workforce. I think this enhancement is a critical piece that could potentially direct us toward a more
pro-social and pro-human environment.
Chris (03:54)
Yes, I study even more than AI. I study IA, which is intelligence augmentation. That's what happens when a human being and an AI work together and each does what it does well; the whole is more than the sum of the parts. We see a lot of this in science fiction. So if you're a Star Trek: The Next Generation fan, Captain Picard is the human; Data, who looks like a human being, is actually an android, an artificial intelligence.
And the two of them do IA together in really interesting ways.
Jiani (04:26)
Beautiful. And we'd be curious, coming at it from a more foundational perspective: what is the core foundational difference
between an immersive environment augmented by artificial intelligence and the physical 3D environment that everybody lives and breathes in, especially as it relates to skill development and human development? Are there any foundational benefits?
Chris (05:06)
Yes, there are definitely foundational differences. And actually the virtual world and the real world, if they're well designed, complement one another. It's not that one is better than the other; the two together provide a kind of both-and. So let's take something like learning negotiation. When negotiation is taught in the real world, in classes of different kinds, you find a human being to practice your negotiation with: a
fellow student, maybe the instructor hires an actor, maybe you go home and try to negotiate with your children, whatever. But it's just that one person that you're negotiating with. In the virtual world, you can have many kinds of digital people that you can negotiate with: people of different genders, people of different ages, people of different ethnicities. So there's a broader range of perspectives. Also in the
real world, after you negotiate, it can be hard to remember exactly what you did. But in the virtual world, we can collect very rich data streams of exactly what happened, and then we can use machine learning, part of artificial intelligence, to analyze patterns in that and provide feedback about those patterns. The virtual world is also often more affordable than the physical world,
because you're not having to create a whole context. If you're learning, for example, how to navigate a hospital, you can be in the physical hospital. That's complicated. Or you can be in the virtual hospital. That's much simpler and more scalable. So I think that what we're looking at in many ways is training in which the early training happens in virtual worlds.
But then when you get to needing the full bandwidth of human interaction and all the rich subtleties of human interaction, that's when you need the real world.
Jiani (07:05)
I love that. I appreciate how, in the virtual reality environment, the artificial intelligence can really leverage all the data points that it collects to enhance decisions, interaction, or design. Sometimes people say a thriving artificial intelligence is really a thriving database, and we may worry that we don't have enough data. But if we look at a human being and all the data points a human
can generate in an immersive environment, it's a lot. So data won't be an issue.
Chris (07:40)
Yeah, let me show you an example if I can.
If you're curious about that, I'd encourage your viewers to take a look at the website for AI-ALOE, where a lot of these tools are illustrated.
OK. So the institute that I'm associate director of research for is about workforce reskilling and upskilling, and we're developing many kinds of tools that illustrate what I'm talking about with IA, intelligence augmentation. There's a tool called SMART. Adult learners, whether they're in the workplace or in a course, may need to learn a new concept that's quite complicated. They can demonstrate their learning by writing a summary. And then the AI, SMART, doesn't write the summary for them, but it evaluates their summary and provides feedback on how to make it better. So that's an example of one of many kinds of tools that
are being developed in our institute that illustrate this idea about IA.
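The IA pattern Chris describes with SMART, where the AI critiques the learner's summary rather than writing it, can be sketched in a few lines. The sketch below is purely illustrative: the prompt wording, the function name, and the `llm` callable are assumptions, not the actual SMART implementation.

```python
# Hypothetical sketch of the IA pattern described above: the AI critiques the
# learner's summary rather than writing it. Prompt and `llm` are assumptions.
from typing import Callable

def summary_feedback(concept: str, learner_summary: str,
                     llm: Callable[[str], str]) -> str:
    """Return formative feedback on a learner-written summary.

    `llm` is any text-in/text-out language-model callable supplied by the
    host application; nothing here assumes a particular vendor API.
    """
    prompt = (
        f"A learner is studying the concept: {concept}.\n"
        f"Here is their summary:\n{learner_summary}\n\n"
        "Do NOT rewrite the summary. Instead, list (1) ideas that are accurate, "
        "(2) important ideas that are missing or wrong, and (3) one concrete "
        "suggestion for improving the next draft."
    )
    return llm(prompt)

# Example with a stand-in model, so the sketch runs on its own:
if __name__ == "__main__":
    fake_llm = lambda p: "1) Accurate: ... 2) Missing: ... 3) Suggestion: ..."
    print(summary_feedback("photosynthesis", "Plants eat sunlight.", fake_llm))
```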
Jiani (08:52)
Very, very interesting. That gets us curious. Maybe let's dive into case studies, specific use cases. One of your research projects is called EcoMOD, where learners become scientists: they
go around collecting data from various sources, analyze the data, practice as if they were scientists, and produce findings. Can you share with us how the study was designed, and what the major discovery from that study was?
Chris (09:37)
So EcoMOD, M-O-D, is a little like a video game, except that instead of entering a fantasy world, you're entering an authentic simulation of a real ecosystem. My colleagues and I built four XR-related projects with ecosystem science, and EcoMOD was the last of the four. We'd been working with middle school students, and we said, let's go down to third graders.
Let's see what third graders are capable of doing. So you can imagine third graders in a classroom setting going into a virtual ecosystem, each one on their machine, and wearing the shoes of an ecosystem scientist where they wander around the ecosystem. They find animals, they find different kinds of natural settings, and they try to understand what's happening in the ecosystem over time.
But we also said, let's build another interface for computational modeling. So one of the kinds of magic, if you will, that can happen in EcoMOD is that you can become the beaver in the ecosystem. Now you are the beaver. You're walking around as the beaver. You have to stay away from the wolf. You have to cut down a tree with your teeth. But then when you come out of the ecosystem, you say, now build a model of the beaver.
Chris (11:02)
build a computational model, and you have to think about what you did and then build that into your computational model. So we thought this is really hard. I mean, this is complicated for third graders, but we wanted to see how far they could get. Well, they ate it up and wanted more, all of them. And that was our finding, too, from the three prior curricula: the middle school students ate it up and wanted more.
Jiani (11:10)
Yeah
Chris (11:30)
It's sad that we put a glass ceiling over our children because those elementary school students were doing middle school and even high school work. The middle school students were doing high school and even early college work. And if we just use the technology to turn the students loose, we could be so much farther along.
Jiani (11:52)
That would be a very interesting idea: really leveraging the immersive technology as an empowerment tool, building an environment for them to explore, of course under safe guidance, then assigning them different challenges or requests, and they can explore and develop as far as the immersive environment allows. And we'd be curious,
just to clarify the study a little bit: when you say build a model of the beaver, do you mean program, or tell the system, how the beaver should behave in the environment? Is that it?
Chris (12:40)
Yes, yes, we set up another little simplified virtual environment. I don't know if your viewers are familiar with the programming language called Scratch, but it was developed at MIT. It's very widely used in elementary education for computer science, and you get little, very simple virtual worlds. The students could build a beaver program using block-based programming, which is what Scratch is based on. And then they could
run the beaver program in the little simplified virtual world and see if it behaved in the way that they expected: walking to a tree, cutting down the tree, staying away from the wolf, and so on. So it was really fun to watch them creatively think about what they did when they were the beaver in the simulated virtual world and then try to program something that would behave in the same way in the little artificial world.
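To make the computational-modeling step concrete, here is a minimal sketch, in Python rather than Scratch blocks, of the kind of rule-based beaver program a learner might assemble. The one-dimensional world, the rule set, and the names are illustrative assumptions, not the actual EcoMOD curriculum.

```python
# Hypothetical, simplified "beaver program": a few if-then rules of the kind a
# learner might assemble as Scratch blocks, here written in Python.
from dataclasses import dataclass

@dataclass
class Beaver:
    position: int = 0      # location along a one-dimensional stream bank
    trees_cut: int = 0

    def step(self, tree_at: int, wolf_at: int) -> str:
        # Rule 1: if the wolf is adjacent, move away from it.
        if abs(self.position - wolf_at) <= 1:
            self.position += 1 if self.position >= wolf_at else -1
            return "flee wolf"
        # Rule 2: otherwise walk toward the tree.
        if self.position != tree_at:
            self.position += 1 if tree_at > self.position else -1
            return "walk toward tree"
        # Rule 3: at the tree, cut it down.
        self.trees_cut += 1
        return "cut tree"

# Run the model and check whether it behaves the way the learner expected.
beaver = Beaver()
for _ in range(5):
    print(beaver.step(tree_at=3, wolf_at=-2), beaver)
```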
Jiani (13:36)
I really appreciate that. I would imagine it would be so fun to be a beaver and start chewing down trees and building. It creates a sense of awe and wonder, because we as human beings never get into their perspective. It's such an authentic perspective-taking experience for the kids.
Chris (14:02)
Well, that was part of our goal with the whole set of four curricula. Yes, we wanted to teach ecosystem science. Yes, we wanted to teach science inquiry. In the case of EcoMOD, we also wanted to teach computational modeling. But over all of those, we wanted these students, many of whom may never have been in a natural ecosystem if they live in a city, to see the beauty of nature and the wonder of nature
and the fragility of nature and to become better stewards of nature. So there were a lot of affective goals as well as cognitive goals.
Jiani (14:38)
Beautiful. I think that's one of the major benefits of an immersive environment; some people call it the empathy engine, developing empathy and perspective-taking. And there's this concept of an artificial intelligence avatar, coach, or companion.
Chris (14:54)
Yes.
Jiani (15:03)
If we were to implement artificial intelligence in these immersive environments, what would be different?
Chris (15:21)
Yeah, so when we developed the four ecosystem science curricula between 2008 and about 2019, we were just using what was possible in games at that time. If you play a video game, you can interact with people, but it's a very structured interaction. There are limited things you can say to them, and limited things they can say to you, before generative AI.
And now, if we were to rebuild or extend those, the people that learners met in the ecosystems would be able to have very rich, very open-ended conversations with them, compared to what we were able to do ten years ago. And that's very exciting, because it not only makes the learning more authentic,
but it's also useful for assessment. Before the ecosystems work, from about 1999 to 2009, colleagues and I built another virtual environment called River City, where students went back in time a hundred years and learned about epidemiology and scientific inquiry. And one of the things we had in River City was a character called the wise fool. It would come up to a student
Jiani (16:26)
Mm.
Chris (16:45)
about midway through, and it would say, I'm a newspaper reporter. I've got to turn in a story in two hours about what's happening in the hospital, and I can't figure out what's going on. Can you help me out? And so learners wouldn't feel as if they were being yanked out of the game for an assessment. It was just part of the game, and they would tell the reporter what they thought was happening, and we had a crude way
of looking at that. If they knew what was happening, the reporter would say, great, thank you so much, you saved my job. And if they were lost in terms of what was happening, the reporter would sort of change, and the reporter would say, well, come with me and let's see if together we can figure out what's happening in the hospital. Now, that was cumbersome 20 years ago.
With GenAI, again, you can imagine that being a kind of stealth assessment, embedded assessment that could be really powerful.
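As a rough illustration of what such an embedded assessment could look like, here is a small hypothetical sketch in which the reporter character branches on a crude keyword score of the student's explanation. The keywords and replies are assumptions for illustration, not the actual River City logic.

```python
# Hypothetical sketch of an embedded ("stealth") assessment: the reporter's
# reply branches on a crude keyword score of the student's explanation.
KEY_IDEAS = {"water", "mosquito", "sanitation", "bacteria"}   # illustrative only

def reporter_response(student_explanation: str) -> str:
    text = student_explanation.lower()
    hits = sum(1 for idea in KEY_IDEAS if idea in text)
    if hits >= 2:
        # The student seems to understand what is driving the epidemic.
        return "Great, thank you so much, you saved my job!"
    # The student is lost: shift from assessment to guided co-investigation.
    return ("Hmm, come with me and let's see if together we can figure out "
            "what's happening in the hospital.")

print(reporter_response("Contaminated water and mosquitoes are spreading it"))
print(reporter_response("Maybe it's just bad luck"))
```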
Jiani (17:43)
I love how you guide us to see that assessment can potentially be as simple as lending a helping hand to someone in virtual reality. As simple, as easy, as human as it can be. Will artificial intelligence go beyond the human and avatar
level to the level of the immersive environment itself? Will AI be able to shift the environment to meet learners where they are, based on their current progress? Is that even possible?
Chris (18:28)
It is, and people are studying that now. Many groups are studying that, but I'll give a couple of examples from my own work. One of the companies that I'm an advisor for is called Mursion. It's a company that does what's called digital puppeteering. If you interact with a chatbot, you're interacting with an AI. If you interact with a Mursion avatar, you're interacting with something that looks like
a digital person. It can have many different ages or appearances, but there's actually a human being behind the mask of the digital person. And it takes a lot of hardware and software and AI for that human being, a simulation specialist, to play, say, four or five or six people at the same time. So let's say you're negotiating for a raise, you're learning how to negotiate, and
sitting opposite you in a virtual conference room are three different virtual people: different ages, different genders, different appearances. They're your three bosses. The simulation specialist can play all of the bosses, since only one can talk at a time. And so you can learn negotiation in a really rich way. Before GenAI, that was work that I did not only in negotiation, but
with a colleague, Rhonda Bondie, we looked at how to help teachers have equitable conversations in classrooms. The teachers would go into a virtual classroom, and there'd be five children sitting across from them. Each of the five would be having a different problem with reading, and the teacher had five minutes to say things that would help each child with the particular problem that they had.
Jiani (20:16)
Hmm.
Chris (20:20)
And teachers found that an enormously valuable practice environment. It's kind of like airline pilots using a flight simulator to practice things that have to be done very quickly. And you really need to just have it routinized in terms of how you think about it. In terms of the other part of your question, in terms of inclusion, I've worked with Professor Nicole Mills here at Harvard, who is
Jiani (20:42)
Mm.
Chris (20:48)
head of the Department of Romance Languages, and who teaches the introductory French courses. Languages are really about culture and context. They're different ways of thinking. They're not thinking the same thing using different words or sounds; they're different ways of thinking that have grown up in a culture and context. So she teaches French from that perspective, and she and
a company called Wonda VR, which I'm also an advisor to, created a simulation that was a murder mystery. You're a detective, like Columbo. There's been a murder in a famous surrealist art museum in Paris, the Centre Pompidou. You show up there and you have to interview different suspects and decide who committed the murder. And the suspects are chatbots. So each
Jiani (21:19)
Mm.
Chris (21:45)
chatbot can authentically play a narrow role, just what that suspect was doing at the time of the murder, very well. And they, of course, speak French. So you're practicing all sorts of interesting challenges in speaking French to do this interrogation. Those are just demonstrations of the kinds of rich learning-by-doing environments that we can create now,
combining AI and chatbots, sometimes with a human behind the avatar, other times with an AI behind it.
Jiani (22:17)
If we put a human behind it, what are the roles and responsibilities for the human and for the AI? Does the human try to adjust the AI's responses, or how do those two play together when there's a real human behind the AI avatar?
Or is it either/or?
Chris (22:46)
It's really an either/or. So either the human is behind an XR avatar or the AI is behind an XR avatar. And GenAI is much more limited than most people think it is. Most people think that GenAI is like Hollywood AI, as wise as a human being.
And that is simply not the case. The structure of large language models means that they're very limited in terms of what they can do, but what they can do well is very important for learning. And this isn't going to change. I know there are lots of people who will watch this who will say, yeah, Chris, they're limited now, but wait a year, wait five years and $10 billion, and they'll be like
artificial general intelligence. I don't think so. I remember when I first heard that dogs could sing. I was really excited. I went to YouTube, I looked at these videos, and wow, dogs were singing. But then I realized that dogs didn't sing very well by human standards. And you could spend 10 years and $10 billion trying to teach dogs to sing well, and it wouldn't work, because dogs don't have the structure that enables
human vocalization, and large language models do not have the structure that enables artificial general intelligence. So going back to the specific example, what does this mean? Let's say that you're a counselor, you're learning how to counsel people who've lost a loved one, and somebody is simulating
the person who's lost a loved one, and you're trying to understand how to help them. A chatbot could do some of that. You could say to the chatbot factual things like, have you thought about funeral arrangements? Is there a will? And so on. But for emotional things, you would have to have a human being behind the mask,
so that you could say, you know what, tell me about the grief that you're experiencing. Do you know about the five stages of grief, and what stage are you in? A chatbot has no emotions. A chatbot can't simulate something like that. So in looking at any kind of learning that's learning by doing, practicing a complicated skill, the parts that are simpler and more cognitive we can fake with a chatbot;
the parts that are more sophisticated and that draw on emotional and social knowledge, we need a human being behind the mask.
Jiani (25:42)
Beautiful. Looking to the future, what would be the ideal or best version of the future you can envision, with all these technologies, and their limitations, evolving at the same time: virtual reality, artificial intelligence, Web3, neural links, human brain and machine connectors, and all these
interesting technologies, which I'm still trying to wrap my mind around?
Chris (26:23)
Yeah, well, it's difficult to wrap the mind around because, as I said, I've been doing this for more than half a century and I've never seen a time like this, when so many things were becoming powerful simultaneously. And the one we haven't talked about is online learning. Compared to eight years ago, the world has advanced about 25 years in online learning because of the pandemic.
infrastructures that wouldn't have been built or would have been built very slowly had to be built immediately. I did a keynote in Tibet earlier this year, virtually, and I didn't even have to ask if they had online infrastructure and video conferencing software. Of course they did because of the pandemic. And our knowledge of how to do online well has been deepened because of
the pandemic and the lessons we learned about not just information at scale, which is what MOOCs do, but engagement at scale, which is what MOOCs lack and what online learning requires. So what's the big picture? First, I think it's an ecosystem. So it's many different kinds of learning environments that are all linked together.
And this is in contrast to visions like the metaverse, where you say everyone's going to go into a virtual world and do everything there. That's not going to happen. That's not even a desirable future. But the idea that there's an ecosystem, and sometimes you're in the real world and sometimes you're in different kinds of virtual worlds, like the practice worlds we've been talking about, that makes sense. There's a framework for online learning called communities of inquiry.
And it basically says that cognitive presence, teaching presence, and social presence are all important if you have a community that's trying to figure things out together. I don't have time to describe that in more detail, but the point is that in each niche in the ecosystem, there would be a community of inquiry where there were ways that technology could create those three kinds of presence. And then the niches would be linked together.
Jiani (28:29)
Hmm.
Chris (28:44)
so that as you moved around inside of this overall ecosystem, you would carry with you things that you learned to now learn them deeper in a different niche. And also the data would be saved across all the niches in the ecosystem so that you could do personalization and so that AI could have the kind of big data that it needs in order to be effective.
Jiani (29:08)
Mm.
Chris (29:12)
That for me would be a very exciting future and I don't see technical barriers to that future at this point. There's no miracle that has to happen for that future, but it's a very different way of thinking about the evolution of technology than we're used to. And so while it's an aspirational vision, it's not a prediction that I want to make.
Jiani (29:29)
It's one of the educated guesses. So, along with that beautiful vision, are there any key challenges, risks, or potential concerns that we need to be mindful of early on, so that being mindful early can increase the chances that we step into the future we really want?
Chris (29:39)
Yes, exactly.
You know, I worry about how people are reacting to generative AI, assuming that it's capable of so much more than it actually can do. And there's an explanation for this. Way back when I was a graduate student, there was a professor at MIT named Joseph Weizenbaum. He was an expert at that time in natural language and AI. And he built a little computer program called ELIZA, named after Eliza Doolittle in Pygmalion.
ELIZA simulated a human therapist, using a kind of therapy that was popular at the time from Carl Rogers, where the therapist basically reflects back to the patient what they said. So you could talk to ELIZA and you would say, I had a terrible day today. And ELIZA, at that time, you know, over half a century ago, would basically type back to you on a computer screen: I hear your day was really difficult. And then you would say,
my boss is so hard to work with, and ELIZA would say, I hear that your boss is part of the problem. Now, ELIZA was nothing but a bag of tricks. There was not even AI in it; it was just different kinds of text recognition and pattern matching. And Weizenbaum wanted to show what you could do just with that. But he was horrified to find that people would spend hours and hours talking to ELIZA,
thinking that they were getting real therapeutic advice, when ELIZA didn't understand a single thing that was being said to it and was just using tricks in reflecting words. So what came out of that was a concept called the Eliza effect. And the quick summary is that one of the very powerful things we have as human beings is that when we're born,
our brains are wired to listen for language. We perform, when we're very young, perhaps the most complex learning in our entire lives, which is listening to language, starting to comprehend the language, and participating in a language. But that prioritizes language. And when we hear language from a chatbot,
The Eliza effect means we think that the chatbot is smart in the way that a human being is smart, but it isn't. So overcoming the Eliza effect and keeping ourselves out of misrepresenting generative AI is, I think, a very important challenge right now.
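For readers curious how little machinery the original effect required, here is a toy, hypothetical ELIZA-style exchange in Python: nothing but pattern matching and pronoun reflection, with no understanding at all. It is an illustrative sketch, not Weizenbaum's program.

```python
# A toy ELIZA-style exchange: pattern matching plus pronoun reflection.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "mine": "yours"}

def reflect(text: str) -> str:
    # Swap first-person words for second-person ones, word by word.
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def eliza_reply(statement: str) -> str:
    match = re.match(r"i had a (\w+) day", statement, re.IGNORECASE)
    if match:
        return f"I hear that your day was really {match.group(1)}."
    match = re.match(r"my (.*)", statement, re.IGNORECASE)
    if match:
        return f"I hear that your {reflect(match.group(1))}."
    return "Tell me more."

print(eliza_reply("I had a terrible day today"))        # reflects the day
print(eliza_reply("My boss ignores me and my ideas"))   # reflects the pronouns
```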
Jiani (32:39)
Yeah, because sometimes when you talk with the AI, you really get the sense of, yes, this AI really understands me. And in this pandemic of loneliness, when people get the ability to talk with something that reflects back and acknowledges what they've been saying, you do feel as if some connection is being built.
But they don't understand exactly what it means. Maybe they would forget something. Or if you asked them to go beyond just repeating back what you say, it would be difficult for them to give you big-picture, compound advice or suggestions, rather than just
repeating back, or forgetting or missing some of the data that you've given them.
Chris (33:51)
Yes, my institute, the institute I'm part of, works very hard on guardrails of different kinds for chatbots. And even then, if you have a long enough conversation with a chatbot, 20 exchanges, 30 exchanges, it'll start hallucinating all over the place. So it's a limited but powerful technology.
Jiani (34:06)
And I also heard someone say that the brain cells that we have, the neural connections that we have, compared with the current infrastructure of all the data points that a whole AI system can collect, it's not even comparable. It's like one is this big and the other is this big: this is the AI, and this is the human brain.
Chris (34:31)
Yeah.
Jiani (34:34)
Beautiful. As we move into the magic portion of our conversation, I would like to give a brief recap of everything we've talked about. We've talked about the importance of emerging technologies such as virtual reality and the integration of artificial intelligence. In Professor Chris's perspective, it's more IA rather than AI, because
Chris (34:35)
Exactly, exactly. Yeah, we're a long way from artificial general intelligence.
Jiani (35:04)
we ultimately need the tools to enhance human beings. IA refers to intelligence augmentation: human intelligence augmented through technologies such as artificial intelligence. We also explored a few case studies. One of the research projects is called EcoMOD, where students and learners go into a virtual environment and take on different perspectives, such as
being a beaver and building your first home by the water. The kids are able to take perspectives not just of another human, but of one of the animals in the bigger ecosystem. The research found the experience was really amazing: the kids were so happy and so hungry to learn more. That makes us realize that maybe
the current educational system, where we try to put everybody on the same page and mass educate, has its glass ceiling. There's a possibility where we put kids in a carefully curated and designed virtual environment, where they get to take perspectives beyond their own and even beyond the human perspective, and they really will be able to learn and do higher-level thinking,
exceeding our wildest imaginations. Professor Chris also shared examples such as Mursion, use cases where artificial intelligence is paired with virtual reality, and we can actually interact with avatars to do very interesting exploration, learning, and skill development.
However, there is a catch. Sometimes we still need humans behind those avatars, because the limitations of artificial intelligence are still there. It's what Chris was telling us about: the Eliza effect. When we talk with the AI and the AI repeats back what we've said, maybe throwing in some facts and information, we think that the AI is really like a human. However, the Eliza effect shows that
what we perceive as human is actually not human; it's only programmed to seem that way. We're still far, far away from artificial general intelligence, where AI would be able to synthesize information, generate advice and suggestions, and humanly empathize with what is really happening. So there's a huge gap right there.
We also explored potential guardrails, which Professor Chris has been leading at his artificial intelligence institute: putting those guardrails in place to inform our policymaking and leadership. So it's such a wonderful, beautiful future we
can potentially step into, with people like Chris helping to put us on the right track moving forward. Am I missing anything?
Chris (38:38)
Well, thank you. That's a great summary. And if we fed it into the SMART tool that I talked about, I'm sure you would get a high score.
Jiani (38:40)
Thank you. Beautiful. Now, the magic part: when you were 11 years old, what did you enjoy playing and creating so much that time just disappeared for you?
Chris (39:08)
So when I was about that age, my parents gave me a chemistry set and I was really fascinated with combining different kinds of things and having interesting stuff happen. And not only did I do all the experiments they talked about, I mixed all kinds of things together that they didn't talk about just to see what was going to go on. So that for me was a very evocative environment.
Jiani (39:31)
That's great. I think the only chemistry equipment that I've tried is... I'm not going to say, because compared to your passion, I was probably doing very basic things. I'll just pause that. What role does childlike wonder play in your
professional, adventurous, and leadership roles throughout your life?
Chris (40:12)
So I'm very easily bored. I get bored incredibly easily, and I hated school because I'd sort of sit in the back and read the textbook in the first couple of weeks, and then I had the whole rest of the year stretching out in front of me. That was why I went into education: I wanted to somehow improve education so people wouldn't be so bored. But
I've found the perfect field for me in learning technologies because just when I think I understand what's going on, new developments come along. Online learning, artificial intelligence, advances in XR, and I have to reinvent myself and start all over again. So it's the perfect field for somebody who's easily bored. And I bring childlike wonder all the time because my field is constantly changing.
Jiani (41:01)
I love that. It's kind of your life's calling, or the ikigai you found. And by having that childlike sense of wonder, the research that you're doing and the policies that you're shaping will really help the rest of us see clearly, and see to the core essentials of things, through the evidence-based approaches that you've been working on.
Chris (41:15)
Exactly.
Jiani (41:31)
So, are there any challenges on this path that you feel have gone unheard? What was your challenge on this path?
Chris (41:50)
I'm fundamentally an introvert, but to share ideas, you have to be an extrovert. So I do a lot of my thinking and inventing in relative isolation. But then, it's like a rock and roll band: I go on tour, and
I'm very extroverted and I share ideas, as I'm sharing with you in this podcast. So I have sort of a natural personality and then an acquired personality that I use to be able to bring my ideas to the world.
Jiani (42:26)
I love that, and I think I resonate with you on that. Even though you see me broadcasting here in this space, I'm more of an introvert; exploring ideas in a quiet space is one of the most beautiful things I think a human being can do. However, I think the world needs to hear
the wisdom and the ideas, so I appreciate you acquiring this external-facing personality. So what do you think, overall, is your magic?
Chris (43:19)
We have a wonderful master's program at the Harvard Graduate School of Education. And pre-pandemic, so years ago, the master's students at the end of the academic year did something very fun. They said that the professors as a group were kind of like the Marvel universe, and they gave each one of us a superpower that they said was our superpower.
And the superpower they gave me, and I think it was a good choice, is they called me Vision. That was my super name. They said I was really good at taking aspirational visions, like the vision of the ecosystem of learning, and then making them real and compelling for people. So I do think that in the course of my career, my ability to
Jiani (44:13)
I love that. And I feel so grateful that we're now witnessing part of that magic, hearing the visions that you foresee and the case studies that were born out of that vision and your magic. I wish more of us were aware of our superpower, our magic, so we could live in a world where
Chris (44:16)
develop and share aspirational visions has been kind of the superpower for me.
Jiani (44:40)
we all find our ikigai in helping each other, helping the whole collective consciousness, action, and implementation move to the next level in a benign and kind way. It's so good to have you, Professor Chris, sharing.
Chris (45:11)
Well, thank you, and I'm grateful to you, not just for having a podcast, but for having a podcast that talks about magic and that has a strong set of values associated with it. So thank you for inviting me to be part of this community.
Jiani (45:17)
Thank you, Professor Chris.