The Future of Work: AI-Embrace With a Level of Skepticism, Part 1

UC Berkeley Extension
Jun 26, 2023

Jill Finlayson: Welcome to the Future of Work podcast with UC Berkeley Extension and the EDGE in Tech Initiative at the University of California, focused on expanding diversity and gender equity in tech. EDGE in Tech is part of CITRIS, the Center for IT Research in the Interest of Society, and the Banatao Institute. UC Berkeley Extension is the continuing education arm of the University of California at Berkeley.

We’re taking a look at artificial intelligence and how it is changing the way we educate and the way we work. When we first started hearing about AI, there was a lot of conversation about automation, job displacement and upskilling. Then this year, ChatGPT, a generative AI chat bot, set the record for the fastest-growing user base with more than 100 million users as of February 2023.

AI is changing how we think about teaching, what we are teaching and how we assess learning. Governments are asking, how do we maximize the good that can come of artificial intelligence but minimize the bad? From a full embracing of technology to having a healthy level of skepticism, how will you adapt to the power of AI? To learn more, we turn to Ittai Shiu. Ittai is an instructor in UC Berkeley Extension’s entrepreneurship program, teaching marketing research concepts and techniques. He has a background in digital marketing and advertising technology with 20 years of experience working for interactive agencies and global brands.

He is the founder of LaunchPoint, a California-based nonprofit focused on creating paid professional learning opportunities for students from underrepresented communities. Welcome, Ittai.

Ittai Shiu: Hi, Jill. Thank you for having me.

Jill Finlayson: It’s wonderful to have you here, and I’m excited to dive into education. Where are you seeing AI enter the academics? And how did you become engaged in thinking about this topic?

Ittai Shiu: Well, I’m relatively new in education. I’ve just started teaching this year for UC Berkeley Extension, and like many of my colleagues and counterparts in education, I’ve seen AI becoming more and more capable, with results ranging from amazing to eerie. And like my colleagues, I could see it creep into what students were turning in.

So as a part-time instructor, I teach one class, and it already feels pretty overwhelming. So for full-time educators, especially K through 12, I have an incredible amount of respect for the road ahead of them and what it means for adapting their curriculum and how they engage with students.

Jill Finlayson: How did you see AI creeping into their work? What was your first giveaway?

Ittai Shiu: So I really made it a point to get to know my students. And I think this was an anomaly because, this year, I had the chance to see some of their writing before they discovered AI. So I could see an A/B result of before and after AI, and that clued me in, that, you know what? I really should be paying attention to this.

But on the side, I had been keeping track of AI, and how it’s been developing at light speed, and how it’s becoming this real force multiplier in terms of all industries and its ability to make anybody so much more productive. I started talking to some of my colleagues in education in all grades. I have a friend who’s a history teacher in eighth grade, and he was really frustrated at what this meant for what his students were turning in.

He ended up drawing a line in the sand and just made a parallel with plagiarism: same rules, same consequences, and pretty draconian consequences, like repeat offenses being cause for a failing grade. It was pretty harsh, considering what he’s accountable for teaching at that age, but I thought it made sense.

But then he started to explain his process to me. There’s an AI-detector threshold: if a paper scores above a certain percentage, he red-flags it, compares it with the student’s previous writing style, and then gives the student a chance to have a conversation for deeper vetting. It was pretty controversial with students and parents, but it really kick-started his school’s AI policy.
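The screening workflow he describes can be sketched as a short script. Note that the threshold value, the detector score, and the style comparison here are hypothetical stand-ins for illustration, not any real detector’s API or his actual numbers:

```python
# A toy sketch of the vetting workflow described above.
# The 0.70 threshold and the inputs are hypothetical placeholders.

AI_SCORE_THRESHOLD = 0.70  # red-flag papers the detector scores above 70%

def vet_submission(detector_score, matches_prior_style):
    """Return the next step for a submission, per the described process."""
    if detector_score < AI_SCORE_THRESHOLD:
        return "accept"
    if matches_prior_style:
        # High score but consistent with the student's earlier writing:
        # treat the detector result as inconclusive.
        return "accept"
    # Red-flagged and inconsistent with prior work: talk with the
    # student for deeper vetting before grading.
    return "schedule conversation"

print(vet_submission(0.85, False))  # -> schedule conversation
```

The point of the human conversation step is that the detector score alone never decides the outcome; it only triggers a closer look.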

And that got me thinking about, well, how are all these other academic institutions responding to AI? I’ve got a cousin who teaches 12th grade AP literature. The kids are older. They’ve got more ready access to technology, so there’s definitely no getting around it. And she’s having these really frank conversations with her students about what skills they need to develop to get into college and succeed. And she’s learning how to teach with it. And her school district, along with a couple of others, are starting to set up meetings to announce its policy or at least provide some guidelines on how to work with and talk about AI.

In my case, I’m over here at UC Berkeley. What are the colleges doing about it? I reached out to a colleague at the University of Minnesota. She has a Ph.D. in creative arts education, so she has very different challenges when trying to figure out how AI plays into her classroom. So she shared with me the University of Minnesota’s language around its AI policy and basically broke it into three tiers.

Well, here’s some language if you plan to fully embrace it. Here’s language if you will allow limited AI usage. And here’s language if you fully prohibit AI. And she explained to me how it correlated with her approach: you need to figure out your own philosophy, design a learning experience that aligns with that philosophy, and then evaluate how it aligns or conflicts with the demands of the real world.

And so my takeaway from her philosophy and how it aligns with the University of Minnesota’s message was, hey, use your judgment. We trust you. This is moving really fast, but we’ll get through this together.

I don’t know what the right answer is, but I know what the two wrong answers are. And the first one is staying silent and putting it on the instructor to figure it out. And this is after students have a head start, and they’re learning how to manipulate AI and doing amazing and maybe terrible things with it.

The second is a zero tolerance across the board, and that’s different from that middle-school example. In that case, he was acknowledging what was going on and put defined policy in place. I’ve heard of schools banning AI or blocking it on school computers, and I think that’s a mistake. It puts the organization on the wrong side of the line.

For one thing, it damages your credibility as an academic institution. It gives the faculty the green light to ignore this technology. I think it stigmatizes students, and it will most certainly backfire as AI becomes more commonplace. Imagine a school that had a policy against spell check, or Google, or the copy-and-paste function. So in my mind, any response is good, and it’s additionally important that an organization’s response not be led by a lack of confidence in the technology, or a lack of human bandwidth, or a fear of academic dishonesty.

Jill Finlayson: I find this super interesting. So first of all, this idea that having a policy and having a response is really important. Zero tolerance and ignoring it, not the way to go, so we have to do something. And getting to clarity then around, what kind of support do instructors need?

This is penetrating their classrooms. So as you pointed out, is this the same as plagiarism? Or is this the same as Grammarly, where you can just check your grammar? Tell me where we fall in this.

Ittai Shiu: So it’s interesting. I’m not sure. I wasn’t sure. So I actually wanted to open it up to my students. I asked some of my students, do you feel that AI is equivalent to cheating?

Of course, most of them said no. But it was actually pretty mixed. I think it was something like 60% leaned towards no, 30% leaned towards yes, and 10% leaned towards, hmm. They were undecided.

And I think one of the reasons for that is because it’s not been really well defined by academics, and there is no policy in place. And we’re really in this interesting time because this landed in our laps in the middle of the school year. It’s really difficult to say, OK, this is what it is, and this is what we’re going to do about it, in the middle of the school year.

To answer your question, is AI like plagiarism? It can be. But I think, under the right circumstances, if the guidelines are there, it can be a real tool to help students just be better students. It just so happened that I was teaching a market research class, so I could kill two birds with one stone.

I could introduce a real-world marketing research problem, use fundamental techniques, and put it up on the Blackboard or the PowerPoint presentation. At the same time, I could gather data that I could analyze because, like I said, I didn’t have the context to understand what AI meant to a student. Do they see it as cheating? How do they use AI in their assignments?

The objective was to help educators like me understand the circumstances and perceptions around students’ usage of tools like ChatGPT in school assignments, but also to give me some context that would demystify AI for folks like me and prevent that knee-jerk reaction of being scared or anxious about AI, so that they could have the information and the context to develop a policy and a philosophy that would work for them. The majority of the students, 72% of them, don’t associate AI with cheating or academic dishonesty.

I asked them, under what circumstances do you use AI? This is where this gets back to my point about it being a tool. The answers that came back were what we expected: I use it when I’m late, or I’m bored, or I’m busy, or I need to increase the length of my paper. But by a significant margin, students use AI to get a better understanding of the topic, to help them get unstuck. And I’ve had a few conversations about how AI was very, very helpful with language learning, helping ESL students really understand the finer points of the assignment.

The question about academic dishonesty is whether or not it counts as cheating. There’s a tried and true position on plagiarism. There’s a definition around plagiarism. There’s technology and a process.

So for years, tools like Turnitin have been one of the standards for enforcing that process. They’ve got plans to release some sort of AI-detection plug-in, but that’s not ready yet. And it takes time to evaluate a tool and figure out how to work with it in your class anyway, so there’s no silver bullet here.

Jill Finlayson: I like the fact that you tie it to what is the objective for the use of the AI, and the fact that we can use it to help people better understand concepts and to get unstuck seems super valuable in terms of allowing people to use this tool to really further their education. So what kind of definition would you give it? If we have a clear definition of plagiarism, what would be the correct usage and definition for acceptable AI use in education?

Ittai Shiu: That is going to be a moving target, and that’s why the policy over at the University of Minnesota resonated, because it went from fully embracing, to limited usage, to fully prohibiting. It really is something that the instructor needs to define based off of their curriculum.

Jill Finlayson: I really do like the three-tiered approach to embrace it, limited use. I’m not as fond of the prohibit aspect of it, but there might be a right time and place, which brings me to the point of maybe it depends on the grade level. Maybe it depends on the course because I think it changes how we assess learning.

Is it really doing research? Are we losing the ability to do research because it’s doing the research for us? Or is it around more synthesis, and being able to pull insights and using this as the research tool?

Ittai Shiu: Yeah, those are great points. I think it definitely depends on grade level. My college buddy, who’s a middle school teacher, is very much against AI, for good reason. He’s teaching fundamental writing skills. It’s kind of like giving my son a calculator when he was doing Kumon. It’s not going to achieve its educational goal.

However, at the upper levels, especially at college, we do expect our students to be able to have all of these fundamentals. So if they have tools that allow them to save time, do research, pull together interesting concepts, and tie it all together through their experience and through their intelligence, then I think that’s a direction that we should go. You’d mention something about how we grade, and I think that’s something that I’ve needed to look at.

And mind you, I teach one class. I teach college. I have the luxury of being adaptable. I can change things at the end of the semester. I think I’m scrappy enough to change things midsemester.

When it comes to making these changes for K through 12, for these teachers who are beholden to a curriculum that stretches an entire year and has been taught for several years, I think it’s so much more challenging. So a lot of respect for the path that those teachers have ahead of them. But if a student’s only accountable for, let’s say, a single deliverable at the end of the assignment, a big paper, it becomes easier to heavy up on all those AI shortcuts.

And then it also reduces the student’s opportunities to build on those critical thinking skills. I think it’s important to place value on the process as much as or more than that final deliverable. I started doing this in my class, where I broke down the grade of a project into multiple pieces, a part that had them showcase the thought process behind their objective, their hypothesis, their brainstorming, their outline, as well as being able to articulate their methodology around their analysis.
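One way to encode that multi-part grading idea is a simple weighted rubric, where the process components together outweigh the final paper. The component names and weights below are made up for illustration, not the actual breakdown from his class:

```python
# Illustrative weighted rubric: grade the process, not just the deliverable.
# Component names and weights are hypothetical examples.

RUBRIC = {
    "objective_and_hypothesis": 0.15,
    "brainstorming_and_outline": 0.15,
    "methodology_articulation": 0.20,
    "analysis": 0.20,
    "final_deliverable": 0.30,  # the paper is no longer the whole grade
}

def project_grade(scores):
    """Combine per-component scores (0-100) into a weighted total."""
    assert abs(sum(RUBRIC.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(RUBRIC[part] * scores[part] for part in RUBRIC)

scores = {part: 90 for part in RUBRIC}
print(round(project_grade(scores), 2))  # -> 90.0
```

With weights like these, a student who outsources the final paper but skips the in-class process work still loses most of the grade, which is the incentive the passage describes.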

A lot of those components I thought should also be done in class. Combined with a recap, a summary, Q&A, and peer reviews, I thought they were also very helpful in bringing the critical thinking process into the classroom. So it really put a spotlight on the work, and it forced the students to be more invested in the present and away from a place where they could tap into a ChatGPT window.

Jill Finlayson: I really like the focus on the process and valuing the process, in particular having people explain their thought process. I love the peer review. Doing it in class, this is really interesting. So I did a lot of speech and debate when I was in high school, and you had to do a lot of thinking on your feet. You had to be able to answer questions and debate things. So does this mean that we need to assess things more, as you’re saying, by the components? Does it mean we need to assess by having people kind of defend their thesis, have an interview, ask questions to see if they really understand the material as opposed to just regurgitating material that they see?

Ittai Shiu: Yeah, in a perfect world, where instructors have the bandwidth to do that for every class, absolutely. And where AI forces instructors to put this amount of attention into each student’s personalized experience, there is a silver lining: there are tools that instructors can use to save them time and give them more time in the day that they can allocate towards this type of personal instruction.

The balance and where that falls into place, it’s too early for me to say. And actually, one of the reasons why I’m even confident enough to come on to this podcast is that it’s so early in the stage of AI. I’ll listen to other AI podcasts with leading experts who have done these amazing things, and even they have challenges articulating or foreseeing what the future of AI is, and what that means for human-to-AI interaction in all things, in addition to academics.

Jill Finlayson: I think your empathy is correctly placed for teachers as well. They don’t have unlimited time, unlimited bandwidth to, A, create these guidelines but, B, to change radically how they’re doing everything in the classroom. And to be honest, even the big testing sites are going to be facing these challenges. I believe the OpenAI chief said that it can pass the bar exam for lawyers and is capable of getting a 5 on AP tests.

Ittai Shiu: I would say that this generation that we’re talking about, Gen Z, they’re very realistic about AI. They’ve been described as very individualistic, and this plays out in social media and the emergence of all these niche interests that they embrace. I’ve read, and actually I have budding Gen Z-ers in my household right now, that they push back against all the things that box them in, from social constructs to racial inequality. Being an individual is important, especially in this loud, anxious, obnoxious, hyperconnected world.

And actually, this is something that came out in my research when I asked, what would you not use AI for in an assignment? It was a multiple-choice question where you could select all that applied, and you could also write in your own answer. Some of the popular responses were expected: citing sources, making connections with source material, editing.

But 60% of them said making it look like my work, and 40% said, I want to be able to do the work myself. So despite the initial visceral reaction that a lot of teachers will have about this generation leaning on AI to make life easier, I think my research shows there’s a light at the end of the tunnel: this generation does want to be able to put their own stamp and their own individuality on their deliverable.

Jill Finlayson: It’s really interesting. I just was speaking with one of the students, a college-level student that I’m working with, and he was talking about being able to work with AI is actually a requirement. It’s an expectation and that he’s been going to interviews, and they want to know, how are you going to 3x your productivity using these tools? How are you going to cut out what is more or less menial labor, maybe menial coding, and really focus on using the tools to debug, using the tools to actually deliver at a much higher level?

So from a college student perspective, they’re seeing this as quite an obligation. And that’s across not just coding, but ChatGPT can create UI and UX and all of these different things. So why would you take 10 times the amount of hours to work on something if you could use this tool to do it more effectively?

Ittai Shiu: Yeah, I absolutely agree with that. It’s going to take that innovation and that scrappiness to be able to really understand, and experiment, and really get the most out of AI. For folks like us, who have a job, and we’ve got responsibilities, and we might be a little bit more set in our ways, this has been the story for all new technology.

The difference with AI is that its potential has implications that can benefit all facets of life, and business, and academics, and world problems. And a lot of that inspiration is going to come from innovative usage, and that’s going to come from the scrappy, energetic people who are playing around with this every day.

Jill Finlayson: Yeah, you think about this generation. They, of course, grew up Googling, and that’s Googling for answers. You’re looking for information. Now you can ask ChatGPT, and it’s kind of next-level Googling. It’s drafting summaries. It’s collecting a bunch of information and distilling this.

So as we think about using this tool, some of the concerns that I have are that it doesn’t have transparency. Where are those sources? It doesn’t provide citations. So how do we foster both that critical thinking that you were talking about but the fact checking?

Ittai Shiu: Yeah, I think citing sources is a great way to keep an assignment grounded in the human world. So you’re right. You cannot cite a ChatGPT response as a source, and ChatGPT won’t provide a specific source for a statement that it just made, although it will give you examples of relevant sources.

So at the college level, I think all sources should be properly cited. If a student makes a statement, it should be cited, even if that statement was inspired by ChatGPT. A good secondary source both strengthens and specifically ties into the point that a student is trying to make. And it’s the articulation of that connection that exercises the student’s critical thinking and writing skills.

And that connection can be some other type of anchor as well, something else that ties a relevant point from in or out of the classroom back to the assignment. Another way to keep academics grounded in the human world is tying it to current events, since AI’s access to information lags by a few months. Or life experience: most of us don’t have our own Wikipedia page. Plus, any assignment that incorporates some relevant personal experience is more interesting to read anyway.

And then also classroom discussion. I love it when a student references something that was done in class. That connection is a nuance that an AI is not going to be able to figure out.

Jill Finlayson: Yeah, I think it’s really important to challenge students to validate with real-world experience. One of the things that we do with a lot of startups is, of course, customer discovery. And that means you have to get out of the building, and you have to go talk to actual customers. You have to interview.

And I think building in those types of activities, where you’re doing primary research and you’re validating. In a way, ChatGPT is giving you some hypotheses, but how do you validate those hypotheses? Because I don’t know if people have heard this, but ChatGPT can actually hallucinate or make up answers. So not everything that you’re reading is even founded in reality.

So these are obviously concerning facts as well, but it also builds the need for people to do that secondary and primary research. So what are some of the most brilliant ways you’ve seen students use ChatGPT, or Bard? Which, by the way, Bard apparently pulls in real-time information, so getting to that news and current events part.

Ittai Shiu: Yeah, yeah, it’s a moving target, but I can’t keep a list of my most favorite ways students are using ChatGPT. I’ve seen a couple of great examples, though. I think you had a colleague that used ChatGPT to debug a business project of hers?

Jill Finlayson: Yes, this was the case where a colleague, who’s not a programmer by trade, created a ballet shoe, and put sensors in it, and then wanted the sensors to change the music, depending on the movement of the shoe. And she got fairly well along, but she got stuck. And ChatGPT was able to not only debug it but was able to explain to her why her code wasn’t working, so she learned how to write better code as well.

Ittai Shiu: Perfect example. You’ve got an entrepreneur with a great idea and a vision, and the barrier was the coding. And what I think AI is going to allow any creator to do is focus on what they love and just remove the barriers, whether the barriers be busy work, tons of research or technical QA. AI is going to allow creators to focus on their vision by providing that creative inspiration or a way to explore and test out a concept. Anyone will be able to more easily realize that vision, which is really exciting.

Jill Finlayson: That’s something that I get excited about. How do we bring more humanities people into technology? How do we help them bring to life their ideas if they don’t have the technical skills? And it seems to me that something like ChatGPT 4 could almost be a great equalizer for people who have brilliant ideas but maybe not the technical skills to carry them out. Now they can just give a very clear prompt to ChatGPT and end up with something that they can then import to create a website or to 3D print an object.

Ittai Shiu: Yeah, absolutely, and equality is actually one of the reasons that all my paths seem to be converging, ending up on your podcast. It’s the motivation that got me to start my nonprofit LaunchPoint. It was the notion that not everyone starts off on equal footing upon graduation.

So we can talk about how AI can be a professional equalizer later, but as far as being an academic equalizer, I mentioned that 65% of the respondents in my survey said they’d use AI to get a better understanding of the topic. And one of the challenges for kids wanting to learn is a shortage of people to ask questions of and to get feedback from.

Take an example from home. My kids ask me questions about math. I’ve got them covered, well, to a point. If they ask me about coding, my answers are very limited. And when my kids just keep asking me questions about anything, I just run out of gas, period.

AI’s got this potential to help out students and teachers, and I don’t think it’s fully realized yet because of all the hype around ChatGPT and all the ways AI can be used to shortcut the academic journey. But I just saw the TED Talk from Sal Khan. He’s the founder of Khan Academy, and he talked about the potential of AI in education.

And the title was very eye-catching. It was something like, how can AI help, not destroy, the learning process? And he talked about his AI tool that’s in beta. I think it’s called Khanmigo, and it’s basically a personalized AI tutor. It’s programmed not to give you the answers, of course, but it’s also programmed to identify where a student is struggling, assess their process, and adjust its responses so that it brings them closer and closer to the answer.

It’s also a pretty amazing teaching assistant that saves teachers time, so they’ll have more human hours to spend with students. Scaling tools like this will equalize access to what’s going to be cutting-edge teaching technology for any student in any community.

Jill Finlayson: I really like that idea of personalized education. Back in the day, one of my first jobs was at the Learning Company, which made educational software, and it was based on the premise of, how do we help students learn at their own pace and build their self-esteem? And I do see AI as having that potential and also being student centered, student driven.

What are you passionate about? What do you want to learn more about? And I love the fact that my tutoring skills only go so far, and they end here. And being able to now access these tutors who can answer these questions and not only answer them but here’s how you do it. Or here’s the next thing you could do.

And it’s teaching kids to ask better questions, which is really interesting. There’s a whole startup industry showing up around developing effective ChatGPT prompts, like how do you ask a good question? And to be honest, this is one of the most fundamental skills. If we can have the next generation be effective at asking better questions, that sort of critical thinking, thinking through the impacts and unintended consequences, how might this work out, it might encourage the kind of problem solving, collaboration, and critical thinking that we want to see our students have.

Ittai Shiu: Yeah, absolutely. And in these forums about how you can get the most out of ChatGPT, it trains that. It talks about, what are the types of questions that you can ask so you can get the answers and the results that you want? And I think, in a kind of Wild Wild West environment that ChatGPT is in right now, I think that’s a prelude to some really, really amazing learning tools.

And to your point, teaching students how to think critically and ask the right questions: in order to ask the right questions, you really, really need to understand what you’re trying to get at, what your objective is, what your end result is. And to be able to talk about any topic at that level already shows a level of critical thinking that surpasses me when I had to slog through a paper I didn’t want to write.

Stay optimistic and try to think bigger. Focusing on how AI can be used to cut corners on an assignment is shortsighted, although tempting in the short term. So think bigger about how it can empower you, your family, your kids, your business to do bigger things, to solve bigger problems. And we can all agree that this is moving faster than anything that we’ve seen.

So stay informed, and stay educated on AI’s capabilities. But also have fun with the possibilities. Do your part to educate others so that AI is something that pushes for positive change. And I’m talking about parents to kids but then also kids to parents, teachers to students but also students to teachers. It goes both ways because I think there’s something to be learned in both directions.

Jill Finlayson: So your kind of recap here: think bigger. Have fun. And educate others along the way.

That’s a good place to pause our journey with AI. We’re excited to have you back next month to continue this conversation, focusing on AI and its impact on the workplace. So stay tuned for more on this important and evolving topic.

In the meantime, please share with friends and colleagues who may be interested in taking this Future of Work journey with us. And make sure to check out to find a variety of courses to help you thrive in this new working landscape. And to see what’s coming up at EDGE in Tech, go ahead and visit

Thanks so much for listening, and I’ll be back next month to continue our AI conversation. Until next time.


