The Future of Work: Becoming an AI Native

UC Berkeley Extension
30 min read · Oct 30, 2023

--

Jill Finlayson: Welcome to the Future of Work podcast with UC Berkeley Extension and the EDGE in Tech Initiative at the University of California, focused on expanding diversity and gender equity in tech. EDGE in Tech is part of CITRIS, the Center for IT Research in the Interest of Society and the Banatao Institute. The UC Berkeley Extension is the continuing education arm of the University of California at Berkeley.

This month, we’re taking another look at AI. With the advent of generative AI and large language models and chat bots, we need to look at the skills that you will need, the jobs that are impacted, and the opportunities to increase productivity, not only for yourself but for your organization as a whole. With AI evolving at lightning speed, are we quickly entering the age of adopt or perish? To take a deeper look at this, we have invited Chalenge Masekera to join us.

Chalenge is a data scientist currently working at Faros AI, a company dedicated to enabling enterprises to get invaluable insights into their engineering operations. Chalenge’s passion lies in harnessing the boundless potential of AI and data driven insights. He envisions a world where businesses and individuals alike can seize unparalleled opportunities from AI for success in our rapidly evolving world. He received his master’s in information management and systems from UC Berkeley’s School of Information. Go, Bears, and welcome, Chalenge.

Chalenge Masekera: Thank you very much, Jill. I’m excited to be here.

Jill Finlayson: This is going to be a lot of fun, because this topic is changing and evolving so quickly that what we say today may be very different from what we say tomorrow. But let's talk about the, quote, "kids today."

So I was just reading an article saying that Gen Z natives will seem old-fashioned next to Gen A, the first AI-native generation; this came from a recent Fortune article. So if Gen Z got the iPhone and self-driving cars, what will it mean to be part of an AI-native generation?

Chalenge Masekera: That’s a hard question that I think we are likely going to step on bombs here, because the pace of change has been rapid, and I would want not to put any predictions on what’s going to happen. I think the most exciting thing about technology, at least since I’ve grown up, is how fast and evolving it’s always been. And being AI-native, as I said, my passion is in AI and how to ensure that people can get meaningful insights and be able to increase their productivity or enjoy life much more better. So it’s going to be a very interesting time.

Jill Finlayson: What do you think an AI-native person is going to think? Are they going to think differently than people who are just having to adopt this technology?

Chalenge Masekera: Definitely. The one thing that I've learned since generative AI blew up is to think like a leader. What is the most important resource that you have, that everybody wants to have? The answer is always time. What being AI-native means is being able to get that time back. There are many instances where you think of something, and being AI-native means making sure you're able to think about and use AI at every step of the way.

So what being AI-native means is being able to use AI in pretty much every facet of your life. There's this app that I recently discovered called Pie, and it's organized around very specific tasks that I like doing, and it helps me with every specific thing that I want to do. For example, I want to plan a trip. There's a specific section where you can say, hey, can you plan a trip to Canada or Banff, and it will give you a detailed list.

It also has things like, hey, I want to have a difficult conversation with somebody, and then you can plan for it and walk through it. So being AI-native means being able to utilize AI tools in every facet of your life.

Jill Finlayson: I love that. And you’re absolutely right — time is critical not only for the individual but for companies. How is time used efficiently? When you think about all of these uses, and you just gave a couple of examples with Pie, what is AI good at? What do we want to leverage that’s going to save us so much time?

Chalenge Masekera: I think at this point, and in the future, what AI has been good at, and will continue to be good at in the short to medium term, is the routine, mundane tasks that are repetitive. For example, in my job I'm a software engineer, a machine learning engineer, and I write a lot of code. There are some things that I like writing and some things that I don't like writing.

For example, I have a tool like GitHub Copilot, which helps me auto-write what are called unit tests. These are things that are sort of repetitive: they come with the same structure, and all I need to do is change little pieces of them. So for the mundane, repetitive tasks, that's where AI is going to be super helpful in the short to medium term.
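To make the "same structure, small changes" point concrete, here is a toy Python example, invented for illustration rather than taken from the episode, of the kind of repetitive unit tests that an assistant like GitHub Copilot tends to be good at filling in once it has seen the first one.

```python
# Hypothetical function and tests, purely illustrative: each test has the same
# shape, and only the input and expected value change from one test to the next.
import unittest

def to_celsius(fahrenheit: float) -> float:
    """Convert a temperature from Fahrenheit to Celsius."""
    return (fahrenheit - 32) * 5 / 9

class TestToCelsius(unittest.TestCase):
    def test_freezing_point(self):
        self.assertAlmostEqual(to_celsius(32), 0.0)

    def test_boiling_point(self):
        self.assertAlmostEqual(to_celsius(212), 100.0)

    def test_body_temperature(self):
        self.assertAlmostEqual(to_celsius(98.6), 37.0)

if __name__ == "__main__":
    unittest.main()
```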

Jill Finlayson: So we live in Silicon Valley, or your company is based in Silicon Valley. We’re in a bubble. We accept AI. We think that this is interesting. We’re thinking about daily uses for it, but how much are we seeing AI adoption around this country and around the world?

Chalenge Masekera: I think we are in a very interesting space. AI has been around for a long time, pretty much more than 30 years, and its uses are varied. There's what I'll call traditional AI, and now we are in the phase of generative AI, which is in its hype cycle.

So AI has been used forever. An easy example is autopilot. We've used AI for fraud detection. We've used AI for climate analysis. That's been there, and it's been out there in pretty much every country and every sphere of business.

The new wave, the new AI, generative AI, is the one that's hyped up and overblown, to be honest. Pretty much everybody, and lots of businesses that I know, are scrambling around trying to figure out how to use generative AI. And as far as I know, at least in our company and with some of our customers, we are using these new generative AI-based tools in pretty much every facet of our work.

Because of what they are good at, I haven't seen a lot of adoption in many places other than the US and the developed countries, and the developing countries are lagging in the use of generative AI.

Jill Finlayson: And is that going to be a problem? Think about how the internet started with a small set of countries creating it, and that led to a disproportionate focus on issues that might be US-specific. What are the risks we face if AI is built by a smaller subset of the world rather than with global input?

Chalenge Masekera: I think there's a very big risk of increasing the technological divide, but we have to think about what goes into training these AI models and what is needed to build them. First, there's data. Then we need computing resources. And another problem, which frankly is going to be interesting to solve, is language diversity: a lot of countries have many languages, and these new generative AI models need a lot of data to be useful, truthful, and usable.

So in terms of how we address those risks, we need to figure out mechanisms for leveraging open-source technologies. I know Facebook has open-sourced their large language model, and we can take those and build other specific use cases on top. I'll go back to the app I was talking about.

It has very specific use cases. They take a large language model, then fine-tune it for a very specific use case. These are the examples of how the developing regions, or the industries that haven't been catered to by AI, can get in. So there's going to be, I think, a lot of collaboration that needs to happen, at least in the short term. But over time, like any other technology, the costs are going to come down; think of how expensive it once was to buy a CD or to store data in the cloud. And then I think the divide will become narrower and narrower.
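As a rough sketch of the "take an open large language model and fine-tune it for a specific use case" idea, the snippet below uses the Hugging Face transformers and datasets libraries; the model name, training file, and settings are placeholder assumptions for illustration, not details from the conversation.

```python
# A minimal fine-tuning sketch: start from a small open causal language model and
# continue training it on a domain-specific text file (for example, examples in a
# local language or for a niche task). All names below are illustrative.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "facebook/opt-125m"  # small open model as a stand-in; swap in any open LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical file of domain-specific examples, one prompt/response pair per line.
dataset = load_dataset("text", data_files={"train": "trip_planning_examples.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_data = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-planner",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=train_data,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
```

The point is the shape of the workflow: start from an open model, supply a small task-specific dataset, and train briefly to specialize it, rather than building a model from scratch.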

Jill Finlayson: Eventually, we're going to be able to democratize access to AI and to the creation of different AI tools.

Chalenge Masekera: Yeah, exactly. With every language model or every tool, initially the benefit for the people building it is in the proprietary technologies and skills they develop. But over time, the essential things about it become more commonplace and similar across almost every other model or tool, such that open-sourcing them won't be so much of an issue.

Jill Finlayson: Yeah, I like hearing you mention open source, because a lot of these AI tools are corporate-owned and operated, by Google or Amazon, for instance. Do we have any concerns about this being so controlled by the private sector?

Chalenge Masekera: Absolutely, I think so. It's an issue that we should worry about. I'm happy that recently, I think, nine companies, including Google, Microsoft, and OpenAI, signed a safety pact. From a business perspective, I understand they've invested a lot to get to these models, but it's very worrisome to have so few companies controlling the technology that's revolutionizing the world.

Again, going back to the gap between those who have and those who don't have, it's just going to get wider, because those with access can leverage the technology to make their businesses much better and their employees more productive, and those who work in those industries will be much more effective and will far outclass everybody else who doesn't have access to these tools.

Jill Finlayson: It sounds to me a little bit like you’re trying to solve this problem, this gap. Tell me a little bit about Faros AI, and what is the company trying to do?

Chalenge Masekera: So Faros AI came from the idea that, unlike other business functions, engineering has always been one of the technology-forward sectors. However, as an engineering leader myself, I found that we didn't have visibility into our engineering operations: what we were doing, what we were doing well, and what metrics we needed to enable our employees to be more successful. It's difficult to benchmark the current state, track improvement, and measure impact.

So what Faros provides is engineering business insights to support decision-making, prioritization, and resource allocation, and to improve outcomes and operational efficiency. There's a common saying in software engineering, and tech generally, that pretty much every project runs behind schedule, and this is where having insight that's actionable becomes super useful.

So what Faros provides is a single pane of glass over all dev and ops data for consistent reporting.

Jill Finlayson: You’ve got data. What is the AI doing with the data to make it more actionable like you say?

Chalenge Masekera: First, traditional AI was able to sift through data and give you insights: oh, this is wrong; based on the data we had before, this is likely going to happen. What we can now provide with generative AI is the ability to recommend courses of action and to generate reporting that tracks the specific business outcomes you might want to look at.

So we go from a stage of "this is wrong and might need attention" to "this is what you should be tracking, and these are some of the things you can do to remedy it."

Jill Finlayson: So your clients, how do they benefit from this? How does this visibility change their productivity?

Chalenge Masekera: Productivity is mostly about being able to do more. Our clients use Faros AI to get a holistic view of their engineering metrics, work through them, and deliver more. What every engineering organization, actually every business, cares about is delivering more value to its customers.

And now, since almost every sufficiently large business is a tech company, the more features, the more things that you provide to your customer, the more valuable and more successful you are. And Faros helps your engineering function be able to succeed and deliver more value to your customers faster and more effectively.

Jill Finlayson: How did you end up starting an AI company? As you say, AI's been around for 30 years, but when and how did you come to work with artificial intelligence?

Chalenge Masekera: My story is kind of long winded, but I grew up in Zimbabwe. I did my undergraduate in Zimbabwe. Then my first job was business intelligence, which is full circle to pretty much what we’re doing today. And what business intelligence does is mostly providing sort of insights of what has happened.

But my interest got piqued when I started reading about traditional AI: how can we predict fraud? How can we predict whether something in the supermarket is going to run out? That's how I ended up coming to the US. I was trying to do this, and I did it with some success, but I figured I would benefit from more formal education.

And once I realized the power of AI, how we can utilize it for pretty much everything to make businesses better or even to make everybody's lives better, for example in health care, it clicked for me as something that's going to be here for the foreseeable future and something that I would want to be part of.

Jill Finlayson: Can you remember your novice mindset when you first heard about AI? And what was your reaction, and how did you take those first steps to learning it?

Chalenge Masekera: My first story is called the stinky cheese. I was reading this article where there was a supermarket, and they had this specific stinky cheese that they had. Nobody knew why it was there, so they decided to take it out. They’re like, oh, there’s this specific stinky cheese. The sales are not that much, so why should we keep this on the shelf?

So they ended up removing it. A couple of months later, they see sales start falling. Then, of course, they have to hire some consultants to come look at what's going on. The consultants started digging into the data and doing a kind of data analysis called market basket analysis. They were able to figure out that while most of the customers who came into the shop bought a lot of other things, they always had this specific stinky cheese in their basket, and it was one of the reasons they came to that supermarket.

So once they removed it from the shelves, people stopped coming to the supermarket. Well, they would come, see that the stinky cheese was not there, then leave and probably go look for it somewhere else. This story just blew my mind in terms of how you could narrow a problem down like that with data.
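For readers curious what market basket analysis actually computes, here is a toy Python illustration with made-up transactions: it counts how often items are bought together and measures the "lift" of a pair, the kind of signal that would have flagged the stinky cheese.

```python
# Toy market basket analysis: invented transactions, pair co-occurrence counts,
# and "lift", which says whether two items appear together more often than chance.
from itertools import combinations
from collections import Counter

transactions = [
    {"bread", "stinky cheese", "wine"},
    {"stinky cheese", "crackers"},
    {"bread", "milk"},
    {"stinky cheese", "wine", "olives"},
]

n = len(transactions)
item_counts = Counter(item for basket in transactions for item in basket)
pair_counts = Counter(pair for basket in transactions
                      for pair in combinations(sorted(basket), 2))

def lift(a: str, b: str) -> float:
    """Lift > 1 means a and b are bought together more often than chance."""
    p_a = item_counts[a] / n
    p_b = item_counts[b] / n
    p_ab = pair_counts[tuple(sorted((a, b)))] / n
    return p_ab / (p_a * p_b)

print(lift("stinky cheese", "wine"))  # ~1.33: bought together more than expected
```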

And this is how I got interested in this. I was like, wow. What if we could specifically figure out why a supermarket is not successful? What are the other use cases? I come from Zimbabwe, which is a very agriculture-based economy. What can we do with data there? How can we predict droughts using data? My mom is a nurse. If we have sufficiently large data sets, what can we do in terms of health outcomes that could be super beneficial for the country or even for local communities?

I’ve always loved numbers, and I’ve always loved playing with data. And once I had these two parallel ideas, I want to be in business, but I also like numbers. How do I connect them to be able to do something meaningful, at least with my life and contribute to the world?

Jill Finlayson: Most people have heard about AI by now; I won't imagine anybody is hearing about it for the first time. But for the most part, they're probably not using it in their jobs yet. So how should they think about getting skills? What should they dip their toe into? Have they missed the boat? Has the boat already left? Is AI already sailing without them?

Chalenge Masekera: I wouldn’t say so. I would actually argue that everybody is using AI in one form or the other in their jobs. I think of if you’re writing emails, nowadays most of the email writers have autocorrect. That’s some version of AI that’s there.

So it’s something that’s already tangible and usable. As I said, it’s been around forever. You get on a flight, there’s autopilot. That’s AI. But in terms of specific use cases, what I encourage and what I’ve actually been forcing myself to do is always to think of it in couple steps. First, you want to figure out what your job is, what are the tasks that you do that can be automated? And then from there, you figure out what can I learn that’s out there? Because there’s a lot of things that are out there.

Learn what’s out there and how you can use it. Then focus on a pain point. What are the things that are super like annoying to you that you do? I think it would be like if you’re writing drafts. For me, I always have mental blocks. I use ChatGPT to — hey, you are an amazing content writer. I need to write a draft on an article that I want to publish.

And you can then insert little bits and pieces of AI into your work. We usually joke that it's like having an intern: what sorts of things would you want to delegate to your intern? Always experiment with an open mind, and be prepared to sometimes be shocked, and sometimes disappointed, but just keep experimenting.
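For anyone who wants to script that kind of "you are an amazing content writer" prompt instead of typing it into the chat window, here is a minimal sketch using OpenAI's Python SDK; the model name and prompt text are illustrative assumptions, and the same prompt works just as well in the web interface.

```python
# A minimal sketch of sending a role-plus-task prompt through the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any chat-capable model you have access to
    messages=[
        {"role": "system", "content": "You are an amazing content writer."},
        {"role": "user", "content": "Draft an outline for an article I want to publish "
                                    "on building AI-native work habits."},
    ],
)
print(response.choices[0].message.content)
```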

Jill Finlayson: I like that. You might be disappointed with the first results, but keep trying. So I heard a couple of things there. One is thinking about your job in a different way: here are my overall objectives, but some tasks repeat and don't use all of the creativity I could bring. I actually saw this the other day: I think I was writing a post on either Facebook or X, and it asked, do you want me to draft your post for you?

So it’s sneaking in, right? It’s sneaking into these different places. So I like that of identifying tasks. Figuring out what you would delegate to someone is a great way of thinking about it, and then having this mindset of experimentation. Is it intimidating to people to think about starting to use these tools, or is there a tool — like you mentioned Pie or others that you think are good starting point?

Chalenge Masekera: I think what blew up this hype cycle was ChatGPT, and I think it is very intimidating. Even personally, when I started using ChatGPT, I was like, oh, wow, this could take my job. Then I started using it, and I was like, oh, not really. It was actually amazing. It made me think about things in a different way and helped me a lot in terms of how I function.

As I said, going back to our first question, what does it mean to be AI-native? A large portion of my time is spent as a software engineer, and there are little things you don't remember, or you don't know what the problem is. Previously, I would have to go to Google and scroll through hundreds of articles trying to find the specific one, but now, once I have an issue, I just go to ChatGPT.

I describe the problem that I'm having, and it gives me very succinct answers, which is a big change and a big boost to my productivity.

Jill Finlayson: If we think about learning to use AI, we talked a little bit about some roles and ways you can use it in your job. Are there ways that you can use AI for your hobbies or creativity?

Chalenge Masekera: Absolutely. As I said, I use AI in pretty much everything nowadays. I like cooking, and sometimes I just ask it, hey, ChatGPT, can you generate a very creative recipe for chicken stew? Normally you'd go hunting: there are so many recipes online, each creative in its own way, but being able to ask ChatGPT to do that for you is very, very life changing.

I use ChatGPT and the tool I was talking about, Pie, to help me plan even dates. I use it for planning trips to go home. And you can pretty much, as far as I've tried, ask it almost anything, and it will give you some version of an answer that is useful in one way or another.

Jill Finlayson: I think there’s an opportunity here because you’re often afraid of making mistakes at work. But if you use AI for fun, you’re less intimidated and less likely to worry about mistakes. I like your example of the cooking and recipe and coming up with something interesting. What do you think people need to be wary of or how should they be thinking critically about the answers that ChatGPT or Bard gives them?

Chalenge Masekera: Yes, that’s a very important question and something that people should be wary. So the generative models are currently — they’re all what are called large language models. So essentially they are a language model, and what the language model does, it’s able to give you correct grammar, correct syntax, but it’s not factual.

Most of the time, it'll give you a factual answer. But because of how they're built, there are real questions about hallucination, because all the model has to do is produce language that fits what you're asking. So while you can play around with them, I think this is where the collaboration comes in.

It’s not like you put everything in ChatGPT, you throw the info in and copy paste the result. That’s the why you and why I think jobs are not going to be lost, at least in the short to medium term. You as a specialized person can then say, this answer is correct; this answer is not correct. And then you can either prompt it further so that it gives you what you call it, a more factual answer or more answer that’s correct.

So you shouldn't always take what it says as fact, but it can give you a great starting point, and then you use your own skills to make your work, or whatever you're applying AI to, much better.

Jill Finlayson: Yeah, I think that's really important: question it and then do your own research to verify. Use it as a starting point but not necessarily an end point. If we think about what AI is not good at: because of the hype cycle, everybody says use it for everything. Are there things that we should not be using it for?

Chalenge Masekera: I wouldn’t use it to recommend like what medication I would use. I think it has great ideas, and I think a lot of them now actually restrict giving answers to that. So in terms of AI we’re far from what’s called artificial general intelligence, which is where AI has the same capabilities as human intelligence. We are very far from that.

So I wouldn’t use it for things that are mission critical, that you need at least to have the answers correctly and factual but you don’t have an opinion on it. I think you always have to be using AI at least in collaboration, and you have some level of understanding of what it’s going to give you.

Jill Finlayson: Let me throw some pairings up, and you can give me examples of good or bad uses. So if we say AI and medical, what comes to mind?

Chalenge Masekera: I would say mostly good, but again with the caveat that you still want to consult your health professional. I wouldn't say throw away your health professionals and start using AI. So it's pretty good, but not usable on its own.

Jill Finlayson: Yeah, I think it's best treated as an input, as you point out, rather than the final say. So using AI to look at X-rays and surface problems, and then still having a practitioner look at the actual results: it can surface things that people might have missed otherwise. What about AI and education?

Chalenge Masekera: Very good. Again, because of the large amount of data we have, we've been able to build pretty decent models there. And it also depends on the level: if you are in college, you're expected to have some level of critical analysis and reasoning capability, and instead of reading all the books (well, you still should), it can help you narrow down and summarize what you're trying to get.

Jill Finlayson: If you were to advise a teacher how to address students using AI for their assignments, what advice would you give them?

Chalenge Masekera: That’s a hard one, actually. I think I would encourage them to use it. If I was a teacher, what I would advise my students — be able to first understand the basic concepts of what you’re learning. And once you grasp the basic levels of what you’re understanding, or if there specific things that as you’re studying you’re not getting, you could then use generative AI tools to say, hey, can you explain artificial intelligence to me like a 10-year-old? And from there you can build your own intelligence.

However, I wouldn’t say you should just, at least for now, go on any of the generative AI tools like ChatGPT and say learn everything from there and totally throw away all your exercises and textbooks.

Jill Finlayson: In our previous podcast, we talked about collaborative intelligence, and we raised the question: is working with AI working with a tool, or is it an actual collaboration partner? What's your opinion?

Chalenge Masekera: think it’s a combination of both. For me, it’s a collaboration partner. As I said, sometimes when I’m writing the code that I use, sometimes it suggests things like maybe a paragraph or two paragraphs or 10, 20 lines of code. And in that way in software engineering, there’s a term called peer programming, where normally it would be two humans working on the same piece of code together, and one person will be either typing or you’re typing together.

So in that use case, I think of it as more of a collaboration. Then there are also use cases where, if you're a salesperson, it can summarize the call: what are the next action items? In that sense, I think it works more as a tool and less as a collaboration partner.

Jill Finlayson: And we alluded to this earlier. Is my job going away?

Chalenge Masekera: I don’t think AI is at a point where they can be podcast interview hosts at this point. I think it’s a little too far out. We are getting there with being able to generate text and voiceover media. One of the podcasts that I listen to, they request questions from the audience, and they’ve built a chat bot which will mimic the person and trained it with knowledge this person has from their books and stuff. And now they’re hoping to use that if somebody asks a question, the host doesn’t have to actually answer the question, but the bot will actually respond to your request in the host’s voice and also based on the author’s knowledge.

Obviously, there’s going to be a lot of learning from it, but it’s a very interesting way of thinking about it. But overall, I think jobs being lost I think we are way, way overblowing what AI and generative AI at this point are. I would think of it like every other tool that has been invented by humanity. When we started having autopilot, did we not have more pilots? We did.

Yeah, we didn’t lose pilots. So it’s going to be — there are some jobs, how they are, they’re going to be replaced, or maybe modified in specific ways. Going back to what are the things that currently is good at — automating tasks that are currently repetitive. So there are some jobs that are going to be altered, changed, and some may be phased out.

But on the other hand, there are going to be new jobs created. And at the moment, until we have what I referred to as artificial general intelligence, the number of jobs we're going to lose is way overblown.

Jill Finlayson: Yeah, it comes back to your point. This is more an enabling technology. So if you think about a person who does sales, AI can tell you about the person you're going to be having a sales call with. It can tell you about features that this person might be interested in. But at the end of the day, you have to have that conversation with the person and be able to adapt and pivot. It just gives you a higher starting point.

Chalenge Masekera: Yes, it will definitely improve where you start from, but in some use cases, again, there are things that are going to change. Take chat assistants: to be honest, on a lot of websites now, when you have a problem and contact support, a lot of it is AI. So those jobs changed, and some of them actually got replaced.

So there’s going to be things that are altered and some that are probably replaced, but it’s not a case where every job or pretty much almost all jobs are going to be replaced. AI can’t weld. We’re not going to get there.

Jill Finlayson: So for the coders in the audience who might be worried about some of their coding being automated: how can AI be helpful in coding and enable them to be more efficient? Are there steps that they can take to play around with AI?

Chalenge Masekera: Yeah, the one thing that I’ve learned being a software engineer is there’s always more work, no matter how much you work. You write some new software, and you create bugs, and that’s more work for you. What I actually think is we’re going to have more software engineering jobs.

The nature of software engineering jobs is going to change in some ways. As for how we use this generative AI: one of the tools we use is called GitHub Copilot. It's embedded in the notebook or IDE that I work in, so while I'm typing, it gets context from all the files and everything I've been working on, and it can suggest changes I should make.

Sometimes if there’s a bug, it can explain why it’s a bug. The other things that I have to do, we do what are called peer reviews. If I write some code, you push it to the cloud, then somebody has to look at it to say, oh, this makes sense or help you figure out what are the problems. Either they have context or just like a different set of eyes.

It helps. It’s just like when you write an article or a comment. Somebody, a different pair of eyes needs to look at it and helps you suggest better thing. And some of these now tools we’re employing, if they are relatively small changes, we don’t have to wait for somebody to look at it and say, oh, this works and we should put it into production. An AI assistant can just say, oh, this works and automatically approves it and merges, and then you deliver a feature.

So instead of having to wait a couple of hours for somebody to look at your code and point out which line might be a problem, you can keep moving. And again, tying it back to business or societal impact: the more things you deliver, the more customer value you deliver.
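The plumbing behind that "small change, auto-approve and merge" idea can be quite simple. The sketch below is a hypothetical illustration using GitHub's REST API with made-up repository details; it only shows the size check and the approve/merge calls, not the AI review that a real assistant would layer on top.

```python
# Toy sketch: if a pull request touches only a few lines, approve and merge it
# via the GitHub REST API instead of waiting hours for a human reviewer.
import os
import requests

OWNER, REPO, PR_NUMBER = "example-org", "example-repo", 42   # hypothetical values
API = f"https://api.github.com/repos/{OWNER}/{REPO}/pulls/{PR_NUMBER}"
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
           "Accept": "application/vnd.github+json"}
MAX_CHANGED_LINES = 10   # arbitrary threshold for what counts as a "small" change

files = requests.get(f"{API}/files", headers=HEADERS).json()
total_changes = sum(f["changes"] for f in files)

if total_changes <= MAX_CHANGED_LINES:
    # Leave an approving review, then merge the pull request.
    requests.post(f"{API}/reviews", headers=HEADERS,
                  json={"event": "APPROVE", "body": "Auto-approved: small change."})
    requests.put(f"{API}/merge", headers=HEADERS,
                 json={"merge_method": "squash"})
else:
    print(f"{total_changes} changed lines: waiting for a human review.")
```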

Jill Finlayson: Yeah, and I think you make an especially good point for coding: it helps you debug, and it probably helps prevent bugs from going in, because it won't make typos. It won't have those kinds of smaller errors that a human can make just in the course of writing the code.

Chalenge Masekera: Well, it does make a lot of errors.

Jill Finlayson: Oh, it does? Say more.

Chalenge Masekera: At this point, it actually does make a lot of errors. When I first used Copilot, I think a year ago, it was about as noisy and unusable as you could imagine, but over the year it's become better. And that's the trajectory it's on: every year, the more data it gets, the more you use it, and the more you flag, oh, this wasn't correct, the better it becomes.

In some instances you can say, hey, here's the problem, describe it, and it will come up with a sample, a fully fleshed-out section of code that fixes the problem you have.

Jill Finlayson: Yeah, I think it’s really interesting that coding, it can be wrong; it can give the wrong information; people might think it’s not worth trying, but it is improving over time, dramatically improving over time. So what you used a year ago is probably quite different than the GitHub Copilot that you’re using today.

Chalenge Masekera: Yeah, absolutely. Yeah, I think they have been rapidly iterating on it. And again, going back to my first point of how should you embrace AI, again, be prepared for disappointment, but keep experimenting. As long as we see the results we are seeing, people are going to keep improving these models.

Jill Finlayson: So you gave a couple of other examples I wanted to go back to, which is writing reviews or drafting letters, things that might be difficult if you're not a natural writer. I remember the old quote, I think it was H.G. Wells: "no passion in the world is equal to the passion to alter someone else's draft."

And so if you can get AI to give you something and then you’re going in and editing rather than starting from a blank piece of paper, is that one of the ways that you found it really helpful?

Chalenge Masekera: Yes. I can also speak to this personally. I'm not a native English speaker, and for the past four years I've always had Grammarly installed. My English is pretty good, but still, because I'm not a native speaker, there are times I think in my mother tongue. Having this thing always right there next to me as I type, saying, here, maybe rephrase it this way, giving me a score, and telling me what the sentiment sounds like, is very life changing, or at least makes me sound like a professional.

Going back to how useful it is: there are times when you write a letter and to you it sounds super joyful, but somebody reads it and says, oh no, why are you being aggressive? So it's like having that thing there that helps you catch that.

Jill Finlayson: So let’s talk about where we’re going. We’ve seen it change just a lot in the past year or two. Where do you think we’re going with AI?

Chalenge Masekera: In terms of generative AI, I think this is the breakthrough technology of the past few years; people had been working on it for years before it broke through. So there are lots of possibilities. One I can think of goes back to what we were talking about: is AI in health care good or bad? I think we're still right at the cusp of it.

And imagine we’re talking about diagnosing diseases. We could over time be able to diagnose if you’re in college, you’re looking at brain scans. We could use AI to predict that, and I think there was a study published a couple of years where an AI performed better than a lot of human specialists who had been trained with that.

So we’re going to incorporate a lot of these tools in health care. Another thing is you have your doctor. I have my doctor. And every time you have an illness, they have come up with a personalized treatment plan. It’s something that it takes time to figure out this is the specific thing that Jill has, and this is their history. Over time we should be able to use AI to build very, very personalized treatment models to people. And this will improve their health outcomes.

Another common use case is autonomous vehicles. I like driving, and luckily I don't commute. But if you were commuting every day and had to drive, imagine the productivity gains if you could just sit in the back of the car and the car drives you to your office or to your meeting. Or on a long road trip: a lot of accidents happen mostly because of human error. Autonomous vehicles, whether collaborative or fully autonomous, will help prevent a lot of those issues.

So I think there’s lots of possibilities. I think, again, as the AI becomes better, we’re just going to chip in many any different directions.

Jill Finlayson: So for our listeners, as we think about this, whose job is it to introduce AI tools in their workplace?

Chalenge Masekera: Normally, you'd want to say it comes from the top, but if you as an individual are very focused and determined to succeed at your job, there are very specific use cases, across a lot of jobs, especially office jobs, where you can use AI without needing a mandate from the top. But also, as business leaders, it's imperative to look at which use cases we could apply AI to in our business and how we can use it to improve how our workers work.

I was reading this article about a consultant who was asked to look into a business that was not running well. What they did was go to ChatGPT, dump in all the data and the financial reports, and prompt it: hey, what should we do to make this business a little more profitable? And they actually added a caveat: without having to reduce headcount. Within minutes it came back with, oh, these are the steps you take, and a very detailed plan.

The insight was not that it came up with super creative ways to fix the problem, but that it gave you the obvious things that somebody might have taken a week to find: these are the problems with our business unit; we are spending too much. That's a week you've saved by using AI, and as a business you can then focus on more creative thinking. How can we be more productive? What can we do to make our business function better?

Jill Finlayson: So now you’ve made me a little bit nervous. If we put that kind of company data into a ChatGPT, does ChatGPT keep that data? Do we have to worry about our information getting out?

Chalenge Masekera: That’s a good point. We definitely have to — I think for ChatGPT, you have to. They do use the data that you supply to train their models to become better. So it’s a worry again for us, as when we use AI we are very specific in terms of ensuring that we don’t give out — when we do the prompts, we don’t give specific business-related data. We don’t put customer information when we make those prompts.

So obviously, yes, there is risk, and there are cases where the AI tools use your data to become better, and you should be cautious. But it's also something you have to be mindful of: is this something that could be harmful to your business, your organization, or yourself?
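One lightweight way to act on that caution is to scrub obvious identifiers before a prompt ever leaves your systems. The snippet below is a toy illustration using simple regular expressions; a real deployment would rely on a proper data-loss-prevention or PII-detection tool.

```python
# Toy redaction pass run over prompt text before sending it to a third-party model.
import re

def redact(text: str) -> str:
    """Replace obvious customer identifiers with placeholder tokens."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)              # email addresses
    text = re.sub(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b", "[PHONE]", text)    # US-style phone numbers
    return text

print(redact("Contact Jane at jane.doe@example.com or 415-555-0123 about the renewal."))
# -> "Contact Jane at [EMAIL] or [PHONE] about the renewal."
```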

Jill Finlayson: So what are the three most important things that people can do right now to not get left behind and to be proactively leading the adoption of, as you say, ethical AI in their workplace?

Chalenge Masekera: Again, the first thing is to figure out your pain points and what's out there that you can use for your job. That's the first part. The second thing is to start learning how to use these AI tools more effectively and more efficiently, so that after a few prompts you get the answer you're after and actually save a lot of time.

The third thing: keep experimenting with it, keep playing around with it, keep using it, and integrate it into more and more functions. At our company, we use at least 15 AI tools across different functions: sales, engineering, content generation. So there's always a use case for AI in every function of your business or your life. Again, I sometimes use it just to recommend recipes. So there's always something.

And sometimes maybe there isn't, and it's good to be able to look and say, OK, we've looked; there's nothing right now that works for our business, nothing that will contribute to the bottom line or top line. Then you sit back, but keep monitoring to see if anything new comes out, because sometimes there's nothing, and then something comes along that changes everything.

Jill Finlayson: So keep monitoring, be critical thinkers. Any final words for our folks that you want them to walk away with thinking about this topic of AI and the future of work?

Chalenge Masekera: Yeah, I absolutely want people to approach AI with a super open mind. AI has been here for the longest time, and it's only getting better as we improve our computing power and gather more data. The use cases are just going to keep growing, and the more you can use AI in your life, the better it's going to be.

Again, what’s the most important resource that you have in your life? It’s time. And if you can find ways to give yourself time, again, if you want to have more time to walk around, and if you can be able to get a draft of an email you want to write in less than a minute when you normally take 30 minutes, then you spend the next 15 just editing it. That’s good for you.

I think just approach AI with an open mind and figure out ways that you can use AI to give yourself back time to do the more important things in your life.

Jill Finlayson: Thank you so much, Chalenge. Thank you for saving us time, and thank you for opening our mind to these different applications. Thank you so much for joining us.

Chalenge Masekera: My pleasure. It’s been great talking to you, Jill.

Jill Finlayson: And with that, I hope you all enjoyed this latest in our long series of podcasts that we're sending your way every month. Please share it with friends and colleagues who may be interested in taking this future of work journey with us, and make sure to check out extension.berkeley.edu to find a variety of courses to help you thrive in this new working landscape.

And to see what’s coming up at EDGE in Tech, go ahead and visit edge.berkeley.edu. Thanks so much for listening, and I’ll be back next month to talk about mental health and its importance in our workplace. Until then.


Written by UC Berkeley Extension

UC Berkeley Extension is the continuing education branch of the University of California, Berkeley. We empower learners to meet educational and career goals.
