Digital Charisma: Artflow on AI-Amplified Storytelling & Branding

 

00:00 The Inception of Artflow AI

04:11 Democratizing Creativity with Artflow AI

08:27 The Three Elements of Visual Storytelling

11:25 Use Cases of Artflow AI

20:56 AI in Scenario-Based Learning

25:57 The Future of AI in Movie Making

29:46 Controlling AI Output and Prompt Engineering

35:57 Future of Content Creation and Storytelling

44:54 The Magic of Imagination and Creativity

 

💕 Story Overview

Imagine a world where anyone can conjure vivid visual stories at will for skill development or for creative branding, where AI becomes the paintbrush of the masses. This is the vision @Tim Zhang, the mastermind behind @Artflow AI, brings to life in this captivating conversation in #MAGICademy S3E2.

Story Takeaways

  • Visual storytelling hinges on three core elements: characters, locations, and actions. Characters drive emotional connection, locations provide context and atmosphere, and actions propel the narrative. By leveraging these elements, AI has the potential to empower users to create rich visual stories across various mediums, democratizing digital storytelling and opening new avenues for creative expression.

  • Creating consistent characters across different contexts and locations is a significant technological challenge in AI storytelling. This requires advanced natural language processing and generation models that can maintain a character's personality traits, speech patterns, motivations, and background knowledge across varying scenarios. 

  • To better control AI creative output, systems need to develop the ability to ask follow-up questions, enabling a more nuanced understanding of user intentions. This interactive approach would allow AI to clarify ambiguities, explore alternatives, and gather crucial context, ultimately leading to more accurate and satisfying results that align closely with the user's vision.

  • AI will never fully replace human creative expression because it lacks the lived experiences, emotions, and cultural context that drive true innovation. It cannot replicate the intuition, personal struggles, and emotional depth that fuel authentic human creativity. AI remains a tool to augment human expression, but the soul and unpredictability of human-created art will always be irreplaceable.

  • Tim’s MAGIC: Imagination with adventures

  • Tim's magical drive to create novelty: https://youtube.com/shorts/Pg60g0p2E2Y

    The potential of AI-empowered storytelling to mitigate human bias: https://youtu.be/FIxZrib29i8

    As Life 2.0, humans are always updating our brain software: https://youtube.com/shorts/ZY23Jav2NGg

    AI as a content creator with human in the driver's seat: https://youtu.be/3oNiBiaH3f0

    Easily wild imagination in early childhood: https://youtube.com/shorts/_dOMXVFlpy8

    Visual storytelling (character, location, action): https://youtu.be/dKlT996D-6Q

    Expanding human perception through AI: https://youtu.be/Zp17ZC-tZek

    • Nguyen, T.T. (2021). Storytelling and Imagination. Storytelling Pedagogy in Australia & Asia.

    • Fotaki, M., Altman, Y.H., & Koning, J. (2020). Spirituality, Symbolism and Storytelling in Twenty-first-Century Organizations: Understanding and addressing the crisis of imagination. Organization Studies, 41(1), 7-30.

  • Tim Zhang, founder of Artflow.ai, is empowering individuals to enhance their personal branding through AI-assisted visual content creation. His platform enables users to easily produce professional-quality visuals, allowing them to stand out in the digital landscape. By democratizing design capabilities, Tim's work helps people become "super individuals" who can effectively showcase their unique identities and skills across various online platforms.

    LinkedIn: https://www.linkedin.com/in/yuxuan-tim-zhang/

    https://app.artflow.ai/ : Upload your pictures, create your unique identity, and craft one-of-a-kind images and videos.

  • Jiani (00:03)

    Welcome to the MAGICademy podcast. Today, our guest is Tim Zhang, the co-founder of Artflow AI, a personal movie studio leveraging AI's power to make your own movies and tell your own stories. Pretty cool idea. Welcome, Tim.

    Tim Zhang (00:23)

    Thanks, Jiani. It's a real pleasure to be here.

    So Artflow.ai is a platform that allows everybody, including those who do not have the resources and expertise to create in the traditional way, to have a sort of movie studio on the go, where they can create their own actors, put those actors in different scenes, and then tell stories around those people and events.

    So essentially we're trying to create something that allows ordinary people to tell their own stories in a visual manner. So why did we create such a tool? I guess the initial spark came from my younger years. I have been a storyteller myself since I was really young. I had tons of wild imagination

    in my head. There was once a time when I, you know, folded a lot of paper planes and laid them on my bed and imagined, okay, this is my space fleet and they're going to embark on a great journey and have some sort of a war in space, right? That sort of story. I had a lot of those. But the thing is that those stories have always stayed in my mind.

    I found it really challenging to communicate those to other people, so it sort of stayed suppressed on that front. So the vision for Artflow is really to enable people like me who want to tell a story but don't really have the expertise and resources to do so.

    I guess back in 2018, when I was still working at Wayfair, I had been working on generative technology. But at that time, it was more along the lines of furniture, how to do interior designs. Then one day I noticed a few new papers coming out, which really allowed people to generate very, very realistic faces. And that triggered a thought that

    Jiani (02:47)

    Mm.

    Tim Zhang (02:50)

    hey, maybe this could be the starting point for people to create their main characters for the stories and position those characters in a certain way and let those characters talk in a certain way. So that was the very initial beginning where we think, okay, this whole thing might become feasible. So we sort of monitored how the technology progresses and-

    Jiani (03:12)

    Hmm.

    Tim Zhang (03:18)

    By the time of 2020, we felt like, okay, that's where we see all the major components are mostly ready. And that's when we decided, okay, now it's probably time to do Artflow and make it happen.

    Jiani (03:26)

    Hmm

    So it's kind of like a lifelong project in the making, and the technology finally caught up, so you finally get to integrate it into building your childhood dream.

    Tim Zhang (03:51)

    Yeah, I think for every kid, we all have things that we wanted to do, but can't really see them happen. And technology is such a great enabler to allow kids, and even adults, to achieve what they couldn't do before. So yeah, that's absolutely true.

    Jiani (04:07)

    Mm.

    Yeah. So talking about democratizing the power of creativity, the power of imagination, how exactly does Artflow help people do that?

    Tim Zhang (04:27)

    Yep. So we focus around the concept of visual storytelling. And if we think from a first-principles perspective, there are three basic elements that make visual storytelling possible. For every single story, you've got to have characters.

    You've got to put these characters into certain locations, and these characters have got to do something, or form some sort of relationship, some drama, to really unfold the story gradually. Right? So in short, it's who, where, and what, those three elements. And that's where we tap into solving this problem. We allow people to create any

    characters they can think of, right? Like different professions, different ethnicities, different genders, different ages. And the good thing about this is they just need text to describe what sort of character they want. Like if I want a Caucasian 50-year-old chef, I can just describe it like this, and we're gonna generate a preview of the character. And this character can later be used in the storytelling creation process.

    So that's the who part. And the where part is that once you have your characters created, you can actually put them in different scenes. So imagine all those different movies: they're all shot at different locations, indoor, outdoor, some on a ship, some inside a house, wherever. So it used to be pretty difficult to

    Jiani (05:53)

    Mm.

    Tim Zhang (06:19)

    get those locations, right? You know, people have got to scout different locations, got to see, okay, if the lighting is correct, if all the props are laid out in a proper way. And it's really time-consuming, and it's also very costly. But now with AI, all it takes is just your imagination. You just need to describe: okay, this is a haunted house, it looks like this, and it's located in this location, just describe it like this

    Jiani (06:35)

    Mm.


    Tim Zhang (06:47)

    in a textual manner, and AI is going to help you generate that. And the good thing is that you can actually put your character inside that location you created. So those are the first two, who and where. And when it comes to what, that's what makes the storytelling more interesting. So we think at the moment the most feasible way to let people

    Jiani (06:52)

    Hmm.

    Mmm.

    Tim Zhang (07:16)

    do this is to let their characters start speaking, to really have the narration going, to tell the audience what has happened. So we have the technology to turn a static image into a talking avatar. You're going to see... it definitely still looks sort of robotic, because that's where the technology is at the moment. But...

    Jiani (07:36)

    Mm.

    Tim Zhang (07:44)

    you can see this image all of a sudden becomes alive, right? So this person will have some natural body motion, some natural head motion, and it's just gonna speak like this. And later on, we may add more dynamics into the scene. Probably we can add more hand gestures like this, or even stronger facial expressions

    Jiani (07:47)

    Hmm.

    Mm.

    Tim Zhang (08:10)

    to express certain emotions, right? So those are all gonna help tell the story in the right way. But those will be coming at a later stage. So who, where, and what is how we help our users achieve this.
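The who, where, and what breakdown Tim describes can be sketched as a simple prompt-composition step. This is a hypothetical illustration in Python, not Artflow's actual API; the `Character`, `Location`, and `Action` names and the prompt format are assumptions made for the example.

```python
from dataclasses import dataclass

# Hypothetical sketch of the who/where/what model described above.
# These class names and the prompt format are illustrative assumptions,
# not Artflow's real API.

@dataclass
class Character:  # who
    description: str

@dataclass
class Location:   # where
    description: str

@dataclass
class Action:     # what
    description: str

def compose_scene_prompt(who: Character, where: Location, what: Action) -> str:
    """Fold the three storytelling elements into one text prompt
    that a text-to-image model could consume."""
    return f"{who.description}, {what.description}, in {where.description}"

prompt = compose_scene_prompt(
    Character("a Caucasian 50-year-old chef"),
    Location("a haunted house on a foggy hill"),
    Action("reading a book"),
)
print(prompt)
# → a Caucasian 50-year-old chef, reading a book, in a haunted house on a foggy hill
```

The point is only that each scene is assembled from the same three reusable parts, which is what makes characters and locations consistent across scenes.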

    Jiani (08:20)

    Mm.

    And then maybe later on we can integrate the hero's journey, so at every stage the who, where, and what keep going in circles.

    Tim Zhang (08:36)

    Yep, yep, exactly, the arc of the character.

    Jiani (08:43)

    Hmm. Well.

    So for now, we see their faces, and it's basically like half body. Is there a full body, and can they do dancing now, or not yet? Is it just a face and the upper half?

    Tim Zhang (09:02)

    Right, so we started out doing just head portraits, and then later we slowly progressed to what we can do now, which is that you can actually get the full body. You can get the full body, you can position them in different ways. Let's say someone is just standing there eating ramen, or someone is sitting there reading a book. It's all possible.

    Dancing is a sequential movement, right? So that is difficult at the moment, but we already see signs of making that possible. So yeah, hopefully it'll become a possibility real soon on our platform.

    Jiani (09:47)

    Yeah, this is also kind of an improvised question. What really makes it transform from only a profile image to a full body, then to a full body in movement? Is that a different type of algorithm? Is that a different kind of dataset that Artflow is using?

    What's the reason behind it?

    Tim Zhang (10:18)

    Yeah, right. The key driving factor is obviously the technology itself. So whether or not to introduce certain product features is based on two things, right? One is technical feasibility. And the second is whether it's economically viable for us to do it. So we're going to make sure the generated output looks decent

    and is high quality enough. At the same time, it shouldn't cost us too much to generate, so people can actually afford it. So juggling those factors together means some features can be released sooner, some later on. And for the full body, the reason why we didn't release it in the first place was simply that it wasn't good enough. It was sort of distorted, not high quality, kind of blurry.

    But now we see things get better and better, so that's how things went.

    Jiani (11:25)

    There are a lot of AI-powered image processing platforms and tools out there. What makes Artflow AI different? What makes it sticky for its users?

    Tim Zhang (11:42)

    Yeah, so for Artflow, we intentionally position ourselves as a platform for visual storytelling. So storytelling is a key factor for us. We allow people to create images and videos around the concept of storytelling. In comparison, other platforms like Midjourney, Vionado, those are probably more like generic image generation platforms where people can get really high quality, beautiful images

    but not necessarily tailored or optimized for storytelling. So if I can give one specific example: to tell a story, you've got to have your characters be consistent across different scenes. That's a challenge for AI, because even though you give it exactly the same input, every single time it gives you something different. And that's really good for brainstorming, for thinking out of the box, but that's not good for storytelling, because you wanna make sure your audience

    sees the same character throughout the whole story, so that it doesn't break the immersion. So that's why we spent some extra effort tailoring our website to allow users to create consistent characters. And later on, we're going to introduce not only consistent characters, but also consistent locations. So this bedroom looks a certain way, and no matter when it's generated, from different

    camera perspectives, it's all the same bedroom. So that's important for a user to understand the plot and stay immersed. That's sort of the difference.

    Jiani (13:23)

    This is very unique. You're helping users create consistent locations, consistent characters, and potentially consistent actions in a consistent environment with a consistent body. So very interesting. Great.

    Tim Zhang (13:43)

    Yep, and probably in addition to that, being able to control the shot type in a specific way is also important. There are different shot types commonly used in filmmaking, like a close-up, mid shot, full shot, wide shot. Those are key for a creator to convey their story in a certain pacing, in a certain rhythm, in a certain way.

    So having specific control over that is important. And also specific control over the characters' body poses, facial expressions, even how their body is oriented, right? Like where the eyes are looking. Those are all very important. So if I'm creating a dialogue video and the two people are sitting side by side, the first person has gotta look like that and the second person has gotta look like this.

    We need to maintain it that way, otherwise the immersion will get broken.

    Jiani (14:51)

    Oh my god, that's a lot of detailed technology that has to go in there.

    Jiani (14:59)

    Can you share with us some interesting projects that people are creating on the Artflow platform?

    Tim Zhang (15:06)

    Yeah. So firstly, to take a step back, the way that we see users is defined in three different layers. The first layer is the existing professional creators. Those are movie makers, those are professional animation creators, people like that. This is their job, and they have the

    know-how, expertise, and resources to pull it off. But the thing is, that's a relatively very small proportion of the entire population, less than 1%. The second tier is what we call the aspiring creatives. Those people really wanna create. They have a lot of stories to tell. They have a lot of imagination to share, but they don't have the know-how, they don't have the resources,

    they cannot afford to hire actors and rent locations to do this. And therefore, their storytelling needs have been suppressed. So that's our target user group. We want to help them really tell their own stories and be able to share their imagination. So who are these people specifically? Specifically, they are...

    For example, book writers and novelists. Traditionally they have been writing books in a textual manner, but all of a sudden, with the advancement of AI tools, they can not only write but also visualize, creating all different kinds of illustrations for their books. For example, I want to visualize the protagonist of my story, and not only in a static format; I also want them to say hi to my audience,

    Jiani (17:04)

    Hmm.

    Tim Zhang (17:04)

    let those protagonists introduce the book itself to the audience. So that's a brand new way of doing this. So the second major group is the tabletop RPG game masters and players. So those people, they traditionally gather together physically and play games like Dungeons & Dragons. And those games are just...

    very much heavily based on imagination, all guided by the game masters. But the thing is, all of it happens in here, in the imagination. But if you can see, okay, this is what the NPC looks like, and the NPC not only looks like this but is also introducing the mission to the players in video format, then all of a sudden, the immersion

    Jiani (17:46)

    Mmm.

    Tim Zhang (18:02)

    is on a higher level, so people enjoy stuff like this. And the third group of users is social media content creators. So we noticed a lot of people, for whatever reason, do not want to show their face while creating, for example, YouTube videos or TikTok videos. And some of them are not

    Jiani (18:07)

    Hmm.

    Tim Zhang (18:31)

    English speakers, or their English is probably not fluent, or they may have different accents. So they do want to create faceless videos, or videos containing faces that are not theirs. Traditionally they couldn't do this, but now with Artflow they can create whatever character they can think of and have this character narrate and explain whatever concept:

    sharing three tips for friendship, let me tell you five different ways to meditate, all of these. So that's a new way for them to express themselves, and at the same time they don't need to worry about how they look or how they sound, right? Because the visuals and the audio can be synthesized by AI, driven by their intention.

    Jiani (19:08)

    Mmm.

    So it all kind of comes back to the screenwriting, the storyline, how well you write the characters. So it comes back to creative writing: the better you are at creative writing, combined with the visual power of the AI tool, the better the story you can tell.

    Tim Zhang (19:53)

    Yeah, and now the scriptwriting part can also be assisted. So ChatGPT is a great tool that people use a lot to really branch out their ideas and do different brainstorming. And once they have a solid script, they can take it and visualize it on Artflow.

    Jiani (20:15)

    Mm-hmm. So AI-assisted creative writing and AI-assisted movie making. Interesting. So another case, actually, I was watching what you shared: there's this kind of manager training where

    Tim Zhang (20:21)

    Definitely.

    Jiani (20:37)

    people are developing a kind of learning and development scenario where there's a challenge happening in the team, there's a frustration, and the manager has many different ways to respond to it, and you get a kind of pick-your-own-adventure sort of experience.

    Can you tell us a little bit more about that project you shared?

    Tim Zhang (21:00)

    Yeah, so some background on this: we're trying to create a new product called the dialogue generator. So why do we want to create such a product? It's because we see a lot of people, including, like you said, people from the learning and development domain, and educators. All of these people

    need a thing called scenario-based learning. And actually, that's how we humans learn as well. We put ourselves into different scenarios, intentionally or unintentionally. And then in that scenario, we observe, get the input, and we behave, we react. And then we get feedback based on our reaction, and we either continue that way or correct in a certain way. So scenario-based learning is very crucial for people to learn.

    And the difficulty is that traditionally it's really hard to create those scenarios. So for our case, we have created a scenario where there's a conflict between two co-workers in a professional setting: how to resolve that with high EQ in mind. If we want to do this the traditional way, we've got to hire different actors.

    Jiani (22:06)

    Mm.

    actors.

    Tim Zhang (22:27)

    We're going to dress them up, put on the makeup, set up the lighting, rent the location, set up the gear, and then start shooting. It's very costly and very time-consuming. But with AI, we think anyone can do this. Let's say this person is from the HR department. She has this great idea. She can just come to Artflow and define: okay, I need Alice to look like this, I need Bob to look like that,

    Jiani (22:38)

    Hmm.

    Tim Zhang (22:55)

    and Alice is sitting here, Bob is standing over there, and now they're speaking these specific scripts. So I can use such a scenario to train my employees being onboarded, to convey certain practices or certain values. So lowering the barrier for more people to be able to do this is the key for this product.

    Jiani (23:21)

    Yeah, and as you were talking about it, a lot of questions came up. So, will all actors and actresses lose their jobs? Where do they stand in this?

    Tim Zhang (23:35)

    Well.

    Yeah. So this is how I would see it. Firstly, AI-generated content is not directly competing with the market for professional content at the moment. So while people, just like on TikTok, are browsing on their phones to watch certain short

    videos, they're still going to grab their family members and friends and go to the movie theater on the weekend. So those two things are going in parallel. And as AI rises and more people are able to create and turn their imagination into videos, it's going to be a new way to create and consume content. So it's not necessarily replacing any of the existing ones.

    But at the same time, it's probably also going to enhance some of the existing procedures and workflows. For example, if I'm an actor right now, in order to join different movie crews, I probably need to do tons of different mock acting, right? I need to dress myself up in different ways

    and act in a certain way. But with AI, you can all of a sudden do this remotely, you can sort of remote-control how to do it, and things can be more economical for you. So that could also be a benefit. And some actors are also screenwriters, right? They also wanna become directors. Right now it's not easy for any actor to pull a team together and direct a movie.

    Jiani (25:35)

    Hmm.

    Tim Zhang (25:35)

    But with AI, the actor can not only act, but also direct. So this enables them as well.

    Jiani (25:41)

    Hmm

    So I can see, just going through my mind, the real actor acting in their own directed movie, and they have different characters made by AI, and there's this dialogue or interaction between them. So it's gonna be fun to watch. That's great. So with that ability, going back to the example of the manager and scenario-based learning.

    Tim Zhang (25:59)

    Yep.

    Yep.

    Jiani (26:14)

    Do you think that could be an interesting way for learning and development leaders to leverage the power of storytelling, the power of everyone being able to be a story maker, to train people's ability to be more empathetic, to take on another person's perspective, and overall become better human beings, better leaders, better followers, better

    team players? Do you think it has the possibility to do that?

    Tim Zhang (26:48)

    Yeah, absolutely. I think AI is going to be a key enabler to really address the gap behind people's prejudices and hopefully minimize biases. So I think about it this way. Where do those prejudices come from? Where do those biases come from? It's mostly because

    there is an information gap, right? I see certain things, and those things get concrete and get built into my mindset, and I form that as a stereotype, and next time I use that very set of stereotypes to process future information. All of this is because, at the very beginning, I didn't see the full picture, right? I saw a partial picture and treated it as the truth, as fact. That was not the case. So...

    then the power of AI is that it will allow more people to join the effort of storytelling, the effort of conveying, to really allow more diverse content to be created. Right? For example, right now the majority of movies are developed in first-world countries. But for

    underrepresented groups, for example from Africa, their lives, their stories, their journeys, their adventures are not represented on film screens as much as the Western world's. But with AI, as we have already seen from user behavior on our website, people from Tanzania are able to create their own stories.

    About gender equality, right? Gender inequality as a girl; I can portray that as my very personal story, and I can visualize it and share it on my socials and get it seen by others. As the content becomes more and more professional-looking, more and more engaging, more and more people are gonna see it, and more and more people are gonna see a wider spectrum of what's happening. Hopefully that's gonna

    address the biases and prejudices.

    Jiani (29:17)

    Mm, leveraging stories as a way to draw people in and help them learn about information they've never been exposed to before, and develop more understanding and the ability to be more empathetic; it's just kind of a natural outcome. And people love stories, so why not? That's great. Um, okay, so I'm going to give you a sharper question now.

    So when we do prompting, the AI will give us an image, and sometimes the image is kind of hit or miss. Sometimes it's exactly on the spot, right, exactly what I want. But sometimes, ah, not so much. So how can creators or learning and development professionals, when they're using AI to create images,

    be more able to control the outcome, or better align the outcome with their expectations?

    Tim Zhang (30:24)

    Right. Yeah, this is a good question. So think about this process, right? Essentially, what AI is trying to do is understand you via some sort of medium. And right now, it's text. You describe what you want in your mind via text, and then it's going to try to understand that text and visualize it in a certain way. So there are two factors influencing this. One is...

    when the user is describing what they're thinking of in text, is that accurate enough? If I'm just saying, a cat, I want a cat, AI is going to give a cat, but it's probably not the cat the person is thinking about. But if the person describes the cat in a very detailed way, that's going to help the AI understand: oh, that is the specific cat you want me to generate. That's going to help. And another

    aspect is: can we use not only text to describe it, but other means as well? Maybe, in addition to the text, you can also provide a reference image, right? I need a cat doing certain things, but please make the pose of the cat look like that image, right? So different

    references for the AI as input will help people get their output more accurately. So those are things that our users can control. On the other hand, what we, the tool builders, can control is how the AI understands the information that people provide. When AI sees that text,

    Jiani (32:11)

    Hmm.

    Tim Zhang (32:19)

    or when AI sees the text and the reference image, can it fully understand them? How can we minimize the information loss in this process? So this is more technical. This is about getting more advanced algorithms, making sure they're trained properly, and so on, to make sure that from both ends we're pushing toward a more accurate understanding. But on a different note, sometimes

    Jiani (32:31)

    Mm.

    Mm.

    Tim Zhang (32:49)

    100% understanding may not be what people really want. Because right now, one of the major use cases for AI is that it's a great brainstorming tool. I give it something, and it gives something back, and those things just blow my mind. I have never thought about those possibilities. And that makes sense in its own way. So sometimes, improvisation...

    Jiani (32:54)

    Yes.

    Tim Zhang (33:19)

    It's a good thing about AI. Let it think out of the box.
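The trade-off Tim describes, between tight control (detailed text plus a reference image) and room for surprise, can be sketched as a request builder. This is a hypothetical sketch: the field names and the `guidance` knob are assumptions for illustration, not a real generation API.

```python
from typing import Optional

def build_image_request(description: str,
                        reference_image: Optional[str] = None,
                        guidance: float = 0.5) -> dict:
    """Assemble a hypothetical generation request. Higher `guidance` asks
    the model to follow the inputs more strictly; lower values leave more
    room for the model to improvise. Field names are assumptions, not a
    real API."""
    if not 0.0 <= guidance <= 1.0:
        raise ValueError("guidance must be between 0 and 1")
    request = {"prompt": description, "guidance": guidance}
    if reference_image is not None:
        # A reference image (e.g. for the cat's pose) further constrains
        # the output beyond what text alone can express.
        request["reference_image"] = reference_image
    return request

# Vague prompt, loose guidance: good for brainstorming and surprises.
loose = build_image_request("a cat", guidance=0.2)

# Detailed prompt plus a pose reference: good for precise control.
strict = build_image_request(
    "a fluffy orange tabby cat stretching on a windowsill at sunrise",
    reference_image="cat_pose.png",
    guidance=0.9,
)
```

The idea is simply that the more constraints the user supplies, the less the model improvises, and vice versa.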

    Jiani (33:25)

    Don't be too specific unless you need to. If you're expecting some surprises, don't be too strict, and leave plenty of room for the AI to iterate and be creative. It's like, nice, I've never thought about that. We sometimes welcome that. So for folks who really want to explicitly control the AI images,

    Tim Zhang (33:29)

    Exactly.

    Yes.

    Jiani (33:51)

    Do you think prompt engineering is something they need to learn, or is it just temporary, and ultimately you just need to be a good overall communicator? What's your view on that?

    Tim Zhang (34:04)

    Yeah, I think at the moment it's still a thing that requires a bit of a learning curve for people to grasp: yes, I need to describe things in a certain way to make sure it looks optimal. But ultimately, I think AI will develop the ability to understand people better and to ask follow-up questions to make uncertainties more concrete.

    And if we're talking about an even further future, once we all have this cable plugged into our heads, a brain-computer interface, it's probably going to be even more convenient for a computer to understand people's raw intentions.

    Jiani (34:57)

    Yeah, that type of future, I have mixed feelings about it. Sometimes I want to do so many things, think so many things at the same time, and I wish there were a chip in my brain I could just offload that to. But sometimes: oh, am I a human now? Am I? So yeah, that just puts me into a totally different kind of thinking. But I think that's definitely one possibility, because somebody is already working on it. So.

    We can talk more on that later. So going back to the AI thing: will it be possible in the future that there will be a fully AI-directed, AI-planned, AI-executed movie that we're going to see on the big screen? Do you think that's possible?

    Tim Zhang (35:58)

    I think that is possible. Firstly, from the technological perspective, things are actually moving in that direction. Right now, if you ask ChatGPT to write you a story in text form, it's able to do that. We're not judging whether it's a good story or not, but it's able to do it. And with the visual capability, you can actually turn the story into like a...

    like a short comic book generated from ChatGPT and DALL·E. That's also doable. So if we just extrapolate this into the future, all of a sudden you can do audio as well, and motion as well. And probably someday in the future, it will be able to do a full-blown movie from scratch. I think that is possible. At the same time, I think...

    as a viewer, as a content consumer, we care more about the content itself. Who created that content and how it was created is more of a minor issue for us. When most of us watch a movie, we probably don't care that much how it was created in the first place or who the creators were. So whether those creators are human or AI,

    it's a minor thing for us. So in that case, the creator acts more like an agent, one that helps people consume good content so they feel good and get inspired. And whether the agent is made of AI, human, or a mixture, it's all possible.

    So that's from the perspective of a regular content consumer, right? But if I treat myself as a creator, an artist who wants to create, in that case I wouldn't let AI dictate this for me. At most, I want it to help me, but I'm still the one holding the steering wheel. In that case, AI won't replace humans.

    Humans have an innate need to express themselves, to tell their imaginations. So I'm sure AI will just allow more people to become creators. At the same time, consumers will also have more content to consume, because AI is also a productivity tool.

    Jiani (38:41)

    Hmm. So yeah, I've been waiting for some good movies to come out; hopefully in the future there will be some good ones. Great. And since we talked about neural links and everything, I wanted to circle back on that.

    What would content creation, story-making, look like even beyond the time when AI is able to self-direct, create, and execute a movie? What's even further beyond that?

    Tim Zhang (39:16)

    Yep.

    So the way I look at this is that we're all humans, right? We're sort of a Life 2.0, according to the definition of Max Tegmark. He defined life in three different buckets. One is Life 1.0, which includes, for example, lions, cats, turtles: their physical form is pretty much fixed.

    Jiani (39:45)

    Squirrels.

    Tim Zhang (39:51)

    They can't change it easily. And when it comes to their software, their mental state is mostly fixed as well. It's not as if a mother turtle can teach a child turtle a lot of lessons and pass on that knowledge. It's not like that. So those are Life 1.0, and we humans are Life 2.0. That is, our physical form is currently pretty much fixed; we can't change it much,

    but our software is being drastically updated. We go to school to update our brains, to update our knowledge. We talk to others, and we watch podcasts like this to get inspired. Whenever that inspiration happens, it's an update of our mindset. So our software is constantly updating. But when it comes to Life 3.0, it probably means both hardware and software can be updated.

    That's the future we're heading toward. So coming back to content: what is content? Content is a sort of information. It's like a software update package that updates people's mindsets. Any single child is born with just the basic DNA information, which is about two gigabytes. And for them to grow,

    there are basically two ways. One is to experience things themselves, right? Be in a relationship, do certain things, build a yacht, and so on. They learn while doing. The other way to grow and learn is from experiences distilled from previous generations or from others, right?

    The ability to tell stories falls more into the second case. I'm packaging my experience into this compact content form, which we call a movie or a video story, and I ship it to you. You watch it, you get inspired, you get moved, and your software gets updated. But it doesn't necessarily have to be this way. Once we have the cable, once we have the chip,

    it doesn't have to be something visual. It can happen directly at the electronic level. We just swap neural information, and we probably help shape the topology of each other's neural networks in a certain way. This is all pure speculation, but it might be possible. So in that case, watching a movie could be the old way

    Jiani (42:40)

    Hmm.

    Tim Zhang (42:45)

    of updating the software.

    Jiani (42:48)

    Hmm, so it's like downloading a zip file and unpacking it here.

    Tim Zhang (42:56)

    Yeah, let's say I really want to learn about Mexican culture. I don't want to go there, but can I just download this file? And all of a sudden, okay, I know everything about Mexican culture.

    Jiani (43:16)

    I think I would like that, because nowadays there are just so many books I want to read, but I don't have enough time. And I catch myself actually thinking, "Hey, Jiani, I wish there were a chip or a software I could use to just download the information from this book into my brain, and in one minute, I've read it,"

    with plenty of inspired thoughts based on that.

    There's a need, I think. I think so.

    Tim Zhang (43:55)

    Yeah, we're all constrained by the bandwidth of our input sensors: the eyes, nose, tongue, hands, and ears. But what if we could expand that? What if I could see beyond the visible part of the spectrum? I could see ultraviolet light, I could see different...

    bandwidths. That means I could all of a sudden perceive the whole world in a completely different way. And what if I could hear a larger range of hertz? I could hear things I can't hear at the moment. And for things that are imaginary, what if we could just transmit them at a far faster speed? I think that's going to be a fundamental shift in how humans develop

    and how we collaborate as a whole.

    Jiani (44:55)

    And we will probably no longer need to advocate for inclusion and diversity, because by nature we'll be inclusive and diverse.

    Tim Zhang (45:02)

    Yeah, that would be interesting. Like, who will decide what information gets downloaded here? Maybe everybody needs to download a basic package that makes sure diversity is built in: no prejudices, no biases, everything starts from a fair starting point. Could be.

    Jiani (45:08)

    Hmm.

    Hmm, that's great. We're going far, far into the future, and I love it. So let's pull back and, as we wrap up this podcast, tap into the magic part of it. You mentioned at the beginning of this podcast that you were a very imaginative child.

    Tim Zhang (45:26)

    Hehehe

    Jiani (45:50)

    So can you walk us back to when you were 11 or 12 years old? What did Tim really enjoy doing? What would we find him doing, having fun?

    Tim Zhang (46:02)

    Yeah, so I guess when I was young, around the age of 11 or 12, I was still in elementary school, in grade five or six. During that time I had a really vibrant brain with a lot of wild imagination, and I played a ton of video games.

    I was always thinking about different things. I remember once just squatting in the garden and imagining I was the size of an ant, right? I'm just that small, and I travel through the garden and see the grass and rocks as giant things around me. How would that feel? What if I encountered a bee? What if I encountered another ant? What sort of...

    story would happen that way? All of these were just crazy imaginations. So yeah, a lot of daydreaming at that time. But at the same time... Yeah.

    Jiani (47:05)

    I love that too. I'm already starting to imagine, you know, probably a lady ant, dressed in maybe a flowery dress, and we're going there to get some candies. That'd be fun.

    Tim Zhang (47:21)

    Yeah, exactly. So those wild dreams. And at the same time, we all watched Journey to the West on television. After watching it, I had different thoughts about how the plot could go in alternative ways. I would think about those. Maybe the Monkey King could have some sort of special encounter

    with a different character, with the Transformers, which were also quite popular at that time. What if they went together, formed a team, and embarked on a journey? What would that look like? So yeah, a lot of imagination at that time.

    Jiani (48:11)

    That's nice. And with that in mind, did you have to overcome any sort of challenge early on that helped shape who you are now, or that inspired what you do now?

    Tim Zhang (48:29)

    Yeah, I think overall, um...

    If I try to recall right now, I would say a challenge was probably how to be less lonely. I'm not sure how other kids felt, but when I was that age, my parents were pretty busy, right? They had to go to work, nine to five, sometimes even longer. And other kids, sometimes they...

    They just had their own stuff to do, right? It's not like you could always play together. And I spent a lot of time just being by myself. I really hoped there were more ways to connect with others and more ways to...

    just do different things, right? Create different adventures. Because a lot of things were forbidden. Yeah, I do remember a few times my parents told me, "Hey, you cannot go here and there; you've got to stay home and do your thing." But I had nothing to do, right? It was really boring, just by myself.

    Jiani (49:34)

    Be an adventurer.

    Studying.

    Tim Zhang (49:58)

    No, I didn't study unless I was pushed. So yeah, I'm not sure whether it's a challenge, but it's definitely one emotion I can recall at the moment. And I hope future generations will have more ways to spend time with themselves, more ways to express themselves.

    Jiani (50:01)

    hahahaha

    Yeah.

    Yeah. And I resonate with you. I think loneliness is very common even nowadays. And it's interesting that social media and technology are the most advanced they've ever been in the history we can remember, yet people are the loneliest.

    And I think maybe, you know, having the ability to create their own stories, go on their own adventures, and meet each other in the metaverse somehow to form virtual connections could potentially help, because physically traveling is really challenging, particularly with the weather changing.

    So maybe it all happens in our minds. We just stay at home, but we don't feel lonely because we're going on our own adventures, meeting people remotely, and sharing experiences. I think Artflow can definitely contribute to that future as well. So what do you think is your magic, as we wrap up this conversation?

    Tim Zhang (51:44)

    Yeah, we're gonna try.

    Yeah, I guess my magic, or spark, is that innate need to create, to bring things from zero to one. I think that's a very intrinsic thing for me. Even if I weren't doing the startup, I'd be doing something else that's novel. I want to check out a new...

    way of doing certain things, or just invent something that wasn't possible before. So creating, regardless of whether it's a story or an invention or something else, is that major drive, I guess.

    Jiani (52:39)

    Hmm, that's beautiful. Because for you, to create is to be.

    Tim Zhang (52:50)

    Yeah, I guess it's also something that defines humans as a unique species on Earth, right? We've developed such a large brain, which is one of humanity's unique attributes. And we're just born with this curiosity to create new stuff, explore the unknown,

    and make the impossible happen. It's something sort of built in, I guess.

    Jiani (53:21)

    Hahaha

    It's our default setting. Some people remember it; some people forget. That's great. So good to have you, Tim, and thank you so much for sharing your stories with us. I've learned so much in this short period of time. All the best to you and to Artflow, and maybe in the future you can come back on this podcast and share your

    recent developments and exciting news. We'd be happy to hear that.

    Tim Zhang (54:02)

    Yeah, for sure. I'd be happy to share what we've created later on, and hopefully we'll bring more perspectives and novelties to the platform. So yeah, thanks for having me, Jiani.

    Jiani (54:14)

    Good. Thank you very much, Tim.

 

Disclaimer

  • The content shared is to highlight the passion and wonder of our guests. It is not professional advice. Please read our evidence-based research to help you develop your unique understanding.

  • AI technologies have been utilized to assist in creating content derived from genuine conversations. All generated material undergoes thorough human review to ensure accuracy, relevance, and quality.

 