Someone Stole My AI Content. Is that Even Legal?
Episode Description
In this episode, AI influencer and consultant Leo Rodman discusses the future of AI and its impact on various industries. Rodman believes that AI is an unstoppable force that will affect nearly every profession, from teaching and farming to art and software development.
Guest appearance
Transcript
00:00:00,101 --> 00:00:04,181
I'm Asaf Kas, and today joining me, Leo Rodman.
2
00:00:04,341 --> 00:00:06,461
He's an AI ethics powerhouse.
3
00:00:06,461 --> 00:00:10,981
He's got a Master of Science, multiple certifications like PMP and CSM.
4
00:00:11,621 --> 00:00:14,621
He's got extensive experience in educational technology.
5
00:00:14,621 --> 00:00:15,781
I'm going to talk about that.
6
00:00:16,901 --> 00:00:19,061
LMS administration, AI prototyping.
7
00:00:19,621 --> 00:00:25,541
But he's also an innovation lab mentor at 1871, working with a lot of emerging startups.
8
00:00:25,941 --> 00:00:32,341
And he is a member of the OpenAI Forum, collaborating on AI development, safety, and AGI.
9
00:00:34,101 --> 00:00:44,261
Leo's post on LinkedIn, you know, your post on LinkedIn, where you talked about AI and copyright law and how that all works, is really sort of what
10
00:00:45,221 --> 00:00:46,501
started this conversation.
11
00:00:47,141 --> 00:00:55,061
I would love to learn more about what got you into becoming so vocal about where AI is taking us.
12
00:00:55,741 --> 00:00:57,461
And let's dive in from there.
13
00:00:57,541 --> 00:00:57,781
Yeah.
14
00:00:57,781 --> 00:00:58,741
Thanks so much, Assaf.
15
00:00:58,741 --> 00:01:00,421
I really appreciate you having me on the show.
16
00:01:01,621 --> 00:01:02,821
And thanks for the introduction.
17
00:01:03,541 --> 00:01:13,301
So, yeah, I really think that AI is going to be the future, not just in technology, but in pretty much every field, every profession.
18
00:01:14,741 --> 00:01:26,181
including, you know, teaching, like you mentioned, but, you know, even things as diverse as photography, farming, and sculpture. Everything is ultimately going to involve AI.
19
00:01:26,901 --> 00:01:32,821
So, I think it's really important for us to stay on top of, you know, where it's coming from, where it's going.
20
00:01:33,381 --> 00:01:38,021
I think ethics, like you mentioned, are a really important consideration.
21
00:01:38,661 --> 00:01:41,381
So, like, is AI trained ethically?
22
00:01:41,941 --> 00:01:47,381
Are the people who produce the data that's being used to train AI being compensated fairly?
23
00:01:48,821 --> 00:02:01,781
And, you know, I think the most important is what it's going to do to the world economy as it potentially displaces a lot of professions, not necessarily replacing people, but, you know, displacing them.
24
00:02:01,781 --> 00:02:09,461
So those people then have to find a new place to use their skills in this developing world economy, which is just so important.
25
00:02:11,221 --> 00:02:22,901
You know, like you mentioned about ethics and training, I think it's really important for us to think about, you know, where that data comes from that's used to train AI.
26
00:02:23,901 --> 00:02:30,661
Like, you know, OpenAI is constantly being sued by all these different authors and artists.
27
00:02:30,901 --> 00:02:37,541
Midjourney recently got sued as well regarding image generation and video generation.
28
00:02:39,141 --> 00:02:45,581
So, you know, there's all this happening in these lawsuits. I mean, beyond what's right and wrong.
29
00:02:46,581 --> 00:02:51,941
Yeah, I mean, I think what's right and wrong is one question and where the lawsuit will go is another question.
30
00:02:52,821 --> 00:03:03,781
I would say that, you know, in general, in the United States, I don't think these lawsuits are going anywhere just because, you know, creating all of these roadblocks to progress is too disruptive.
31
00:03:04,341 --> 00:03:09,780
Especially right now, when we have an administration that's very pro-business and anti-regulation.
32
00:03:10,580 --> 00:03:13,940
I think in the EU there may be more teeth where...
33
00:03:14,860 --> 00:03:18,020
You know, the EU is more willing to restrict things.
34
00:03:18,740 --> 00:03:22,300
But I think that, you know, restriction is ultimately counterproductive.
35
00:03:23,060 --> 00:03:25,620
It's not stopping people from doing things.
36
00:03:25,620 --> 00:03:28,260
It's just displacing them once again, right?
37
00:03:28,260 --> 00:03:34,380
Like people are just gonna go to a different country and run their AIs from there in a place that has less regulation.
38
00:03:35,380 --> 00:03:44,340
And, you know, I think that it's very important for the US to be, you know, pro-business, anti-regulation with regard to AI,
39
00:03:44,980 --> 00:03:48,820
just because, in particular, I think China is a really big competitor.
40
00:03:49,660 --> 00:03:55,860
And China is really going to have a very much hands-off approach to AI, in my opinion.
41
00:03:56,820 --> 00:04:00,980
I think that they're going to have restrictions about what you can say about the government.
42
00:04:01,940 --> 00:04:04,020
But other than that, they're going to, you know,
43
00:04:04,420 --> 00:04:08,820
not really enforce copyright because that's historically what they've done with technology.
44
00:04:09,620 --> 00:04:18,100
So I think that any country or group of countries like the EU that strongly enforces a lot of regulation and copyright with regard to AI,
45
00:04:19,460 --> 00:04:20,380
they're not going to stop AI.
46
00:04:20,380 --> 00:04:25,180
They're just going to cause AI to move to other countries and they're going to hurt themselves in the long term.
47
00:04:26,020 --> 00:04:32,500
So, you know, I think it's really important on the one hand to talk about the ethics and like how do we fairly compensate people?
48
00:04:32,500 --> 00:04:39,060
But on the other, I think it's important for every country, you know, to act in their best interests in the long term.
49
00:04:39,060 --> 00:04:44,500
And in the long term, their best interest is inviting innovation and not fighting innovation.
50
00:04:44,660 --> 00:04:45,780
Got it.
51
00:04:46,980 --> 00:05:01,460
I wanted to ask you, so you wrote about copyright and you gave some general guidelines around what you could do and you could sort of explicitly say that you may not reuse my material, et cetera.
52
00:05:03,460 --> 00:05:10,500
When I read that, because I play a lot with Base44 and Replit and Cursor and all the tools that help,
53
00:05:11,140 --> 00:05:19,700
I immediately thought, well, hang on, so basically my code is also AI generated, so would that mean that any software that I wrote with AI
54
00:05:20,500 --> 00:05:21,540
can now be copied?
55
00:05:22,020 --> 00:05:35,700
And then what does this mean when Microsoft said in their last earnings call that a quarter of the code in production is now AI generated?
56
00:05:36,180 --> 00:05:43,780
So if that's true, are we entering a world where we can just copy Excel and now it's yours and they can't do anything?
57
00:05:43,780 --> 00:05:44,980
I don't think that's going to happen.
58
00:05:45,700 --> 00:05:47,780
But that's where I was thinking.
59
00:05:49,380 --> 00:05:55,380
Say more about your process of learning this, and what you were thinking as a creator as well.
60
00:05:56,660 --> 00:06:00,900
Yeah, so the first question is about copyrightability of code.
61
00:06:01,660 --> 00:06:06,060
And in general, technology is not really copyrightable, right?
62
00:06:06,060 --> 00:06:17,540
You can get a patent on a physical device, but you can't really get a patent or copyright on code, which is why we constantly see someone creates a product and then someone just makes a clone of it that's like the exact same thing.
63
00:06:18,540 --> 00:06:25,940
which, you know, the originator does not like, but at least in the US, from a copyright perspective, there's not a whole lot that can be done.
64
00:06:27,220 --> 00:06:42,180
So, you know, I think AI will just kind of accelerate that to some extent, where, you know, you can use something like Cursor that has a screen reader, or Claude also has a screen reader, which means that, you know, it can see your screen.
65
00:06:42,420 --> 00:06:46,580
You could just run an app and say, you know, click all the menus and then make the same app.
66
00:06:47,060 --> 00:06:47,380
Right?
67
00:06:47,740 --> 00:06:48,820
And the AI can do that.
68
00:06:48,820 --> 00:06:51,460
So what's there to stop someone from doing that?
69
00:06:52,340 --> 00:06:53,220
Nothing, really.
70
00:06:53,220 --> 00:07:06,020
I'd say, you know, the same forces that existed before to stop clones, which is just that if you have market penetration and a lot of users, that's, you know, the strongest thing in your favor, right?
71
00:07:06,020 --> 00:07:13,860
That, you know, someone could copy Microsoft Excel, but Microsoft already has a million people using Excel.
72
00:07:15,500 --> 00:07:18,820
And so that's kind of their biggest defense against copycats.
73
00:07:19,540 --> 00:07:21,940
Same thing for other things that are being copied.
74
00:07:22,260 --> 00:07:23,900
Yeah, I think we could also do better than Excel.
75
00:07:23,900 --> 00:07:24,380
It's very slow.
76
00:07:24,380 --> 00:07:31,860
But, you know, the same thing, like there's Canva, for example, which has been very successful, and then everyone else is out there making Canva clones.
77
00:07:32,820 --> 00:07:41,860
But all those clones are slightly inferior, but there's really nothing stopping someone from copying Canva, adding a couple of game-changing features, and
78
00:07:42,100 --> 00:07:43,700
you know, attaining market dominance.
79
00:07:43,700 --> 00:08:00,540
So, you know, really, it's a meritocracy, I think, in software: if you create a product and don't continue to maintain it and upgrade it and stay on top, you know, it's open to anyone to jump in there, clone your product, make theirs a teeny bit better, and steal all your users.
80
00:08:01,620 --> 00:08:10,500
So, you know, to take it a little bit to our world and say, really any business and any business is producing content.
81
00:08:11,580 --> 00:08:18,060
you know, be product content, code, marketing content, sometimes just publications.
82
00:08:19,820 --> 00:08:28,860
What can compliance teams really do to help at least slow this down?
83
00:08:30,260 --> 00:08:40,740
Yeah, so, you know, hypothetically, if you have human contribution to at least 51% of something,
84
00:08:41,460 --> 00:08:43,140
it can be copyrightable.
85
00:08:43,500 --> 00:08:45,140
It applies less to software.
86
00:08:45,140 --> 00:08:47,460
It applies more to like art or writing.
87
00:08:49,140 --> 00:08:54,660
You know, but anything that's created by AI, at least right now, is not copyrightable.
88
00:08:55,060 --> 00:09:00,420
So, you know, if you use Midjourney and you make an image or you use Google Veo and you make a video,
89
00:09:00,900 --> 00:09:04,980
unless you do a bunch of work on top of that, it's not really copyrightable.
90
00:09:05,380 --> 00:09:12,500
So, you know, people will do things like stick their name in the bottom corner of an image, which does not make it copyrightable.
91
00:09:13,140 --> 00:09:15,700
So I could just, you know, erase your name and steal it.
92
00:09:17,140 --> 00:09:23,940
If you make a video, again, unless you do like a bunch of editing and compositing, you're not really putting your stamp on it.
93
00:09:23,940 --> 00:09:28,860
That's always been the idea with copyright: you have to do at least 51% of the work
94
00:09:29,780 --> 00:09:31,380
for it to be copyrightable.
95
00:09:32,020 --> 00:09:37,140
So, you know, within the software world, like I mentioned, there's really nothing stopping people from cloning your work.
96
00:09:37,140 --> 00:09:57,859
The best thing you can do is just stay on top, keep making new features, and make sure your version is the best, which may involve, you know, kind of reverse theft, where if someone clones your product and adds some new features, you then go in and copy their features and add new features on top of that, which ultimately I think is good for progress because it creates this kind of
97
00:09:58,499 --> 00:10:06,859
arms race for upgrades and making software better, where, you know, a company can't do what they used to do back in the day, right?
98
00:10:06,859 --> 00:10:13,179
Like Microsoft in 1995 or whatever made Microsoft Word, and then they didn't change anything for like five years.
99
00:10:14,379 --> 00:10:28,139
which, you know, that model is no longer sustainable just because it's so easy to clone software using AI nowadays, where like I said, you can just unleash an AI on software, say, go click everything, see what all functions do, and then copy it.
100
00:10:29,459 --> 00:10:30,739
So you really have to stay on top.
101
00:10:33,379 --> 00:10:36,499
I don't know, maybe you do.
102
00:10:36,979 --> 00:10:42,019
There's probably a professional term for that cloning risk; IP theft, I think.
103
00:10:42,579 --> 00:10:47,299
But I mean, if it's not really defined as IP, then there's no theft.
104
00:10:48,019 --> 00:10:50,259
But I mean, it still feels like stealing.
105
00:10:50,499 --> 00:10:53,539
As far as the government is concerned, it's not theft, right?
106
00:10:54,739 --> 00:10:54,899
Yeah.
107
00:10:55,059 --> 00:10:59,219
Like I would call it stealing, but if you can't sue someone for it, what does it matter?
108
00:11:00,499 --> 00:11:01,059
Exactly.
109
00:11:01,219 --> 00:11:14,739
So, you know, what do you think teams and companies should be doing when basically it's legal to take away from them what, you know, their hard work has produced?
110
00:11:17,139 --> 00:11:22,579
You know, what can they do other than working faster, releasing more features, and stuff like that?
111
00:11:23,219 --> 00:11:30,419
Yeah, I mean, I think one thing that companies can do is they can put it in the EULA, the end-user license agreement.
112
00:11:31,299 --> 00:11:35,699
You know: if you work for any of the following companies, you're not allowed to use our software, and then list all their competitors.
113
00:11:36,899 --> 00:11:40,459
So that way it does give you potentially a way to sue people.
114
00:11:40,459 --> 00:11:41,699
And again, I'm not a lawyer, right?
115
00:11:41,779 --> 00:11:43,299
But that would be my thought.
116
00:11:44,259 --> 00:11:49,859
Same thing, you know, you can put in the end-user license agreement that you're not allowed to reverse engineer it.
117
00:11:50,739 --> 00:12:03,299
So, you know, that licensing agreement then gives you something on top of the weaker existing laws where people have hypothetically agreed going into using your product that they don't work for a competitor and they're not gonna try to reverse engineer.
118
00:12:04,339 --> 00:12:10,419
But other than that, I think, you know, like I said earlier, the main thing to do is to just stay on top of things and keep releasing features.
119
00:12:11,019 --> 00:12:16,579
And that does create a lot of pressure for development teams where I think, you know, you got to be constantly innovating.
120
00:12:18,019 --> 00:12:22,019
But, you know, that's the world we live in nowadays where everything is moving a million miles a minute.
121
00:12:23,259 --> 00:12:26,419
And there's really no opportunity to like rest and take a breath, right?
122
00:12:26,419 --> 00:12:35,219
You have to constantly have your dev team working on coming up with new features, new things to make sure that you stay competitive.
123
00:12:36,659 --> 00:12:39,779
Yeah, which obviously
124
00:12:40,179 --> 00:12:43,699
challenges the ethics of what we're doing a little bit.
125
00:12:44,979 --> 00:12:50,819
Yeah, it challenges the ethics of human resource management as well, where you can't ever give your team a break.
126
00:12:51,619 --> 00:12:55,059
It's like constantly crunch time, which is tough for engineers.
127
00:12:56,179 --> 00:13:00,579
Say more about that, because I saw you were also vocal about that.
128
00:13:01,539 --> 00:13:03,179
So where do you see this trend going?
129
00:13:04,099 --> 00:13:09,779
Obviously, do more with less is now the mantra for everyone.
130
00:13:12,219 --> 00:13:13,139
Where do you think we're heading?
131
00:13:14,099 --> 00:13:18,499
Yeah, you know, I think that AI is really a productivity multiplier.
132
00:13:19,219 --> 00:13:30,259
And I see it as not significantly different from the internet or Wi-Fi or computers or cell phones, just in terms of it gives people additional tools to work with.
133
00:13:31,699 --> 00:13:38,259
You know, when computers came out, they didn't, like, fire everyone, they just expected everyone to do more in less time.
134
00:13:39,299 --> 00:13:47,059
So I think we're going to see the same thing with software development and the software world or really with any career, right?
135
00:13:47,059 --> 00:13:49,219
You're going to be expected to do more with less.
136
00:13:50,259 --> 00:13:53,939
You know, I'm hopeful that it'll lead to shortened work weeks.
137
00:13:53,939 --> 00:14:08,259
Like a lot of people have talked about 30 or 35 hour work weeks where hopefully it can lead to paying people the same amount of money to work fewer hours with the assumption that they're going to be more productive during those hours.
138
00:14:09,859 --> 00:14:21,939
I think we'll head that way, but it also creates the opportunity for business owners to fire a ton of people and keep the rest working the same hours, with fewer humans working for their companies.
139
00:14:22,819 --> 00:14:25,539
That then brings up the issue of like, what do you do with all these people?
140
00:14:25,539 --> 00:14:27,619
We're gonna have all these unemployed people.
141
00:14:28,419 --> 00:14:32,739
So either you have to pay people the same amount of money to work fewer hours.
142
00:14:33,219 --> 00:14:42,899
Or we're going to end up with a ton of unemployed people, and we'll have to have some kind of universal benefit to support those people so they can feed their families.
143
00:14:43,179 --> 00:14:45,539
And do you see that happening in a capitalistic environment?
144
00:14:46,579 --> 00:14:47,059
I don't know.
145
00:14:47,299 --> 00:14:51,139
I mean, I'm certainly hopeful that people will be generous.
146
00:14:51,299 --> 00:15:00,099
But we've definitely seen a lot of wealth consolidation in the past 20 years, where the wealthiest are becoming wealthier and the poorest are becoming poorer.
147
00:15:01,619 --> 00:15:03,939
So really, I'm just trying to remain hopeful.
148
00:15:05,139 --> 00:15:11,299
From a personal perspective, I'm trying to stay on top of AI so that I'll continue to be employable in 20 years.
149
00:15:11,299 --> 00:15:16,499
If AI is able to replace everything, all we'll have left will be AI managers.
150
00:15:17,459 --> 00:15:26,019
So it's important to make sure that you're thinking like, how will I remain useful in this AI future?
151
00:15:28,099 --> 00:15:31,139
But, you know, ultimately, I'm trying to remain hopeful.
152
00:15:31,859 --> 00:15:44,739
I think that governments will hopefully advocate for their citizens by, you know, trying to create some regulations that say things like, you know, you have to have a reduced work week, which will therefore create more room for jobs.
153
00:15:45,459 --> 00:15:46,579
But really, we'll have to see.
154
00:15:46,899 --> 00:15:49,299
I think that it'll be different in different countries.
155
00:15:50,179 --> 00:15:55,299
I think some countries will probably see a lot of people lose their jobs and a lot of suffering.
156
00:15:56,059 --> 00:15:59,619
I'm hopeful that the US, where I live, is not going to be one of those places.
157
00:16:00,099 --> 00:16:19,938
But I definitely do foresee that in a lot of developing countries, particularly in Asia and Africa, things could go poorly for people. I'm very concerned that there might be a lot of job loss, and that would be really unfortunate for the people losing those jobs.
158
00:16:21,458 --> 00:16:22,658
Yeah, definitely.
159
00:16:22,658 --> 00:16:25,218
I think another aspect is going to be very unfortunate.
160
00:16:25,218 --> 00:16:26,778
I'm curious.
161
00:16:26,778 --> 00:16:28,098
What are your thoughts around this?
162
00:16:28,898 --> 00:16:36,978
There's a lot more impact with AI from the lack of regulation in other countries.
163
00:16:37,298 --> 00:16:47,138
And what I mean by that is that you have, you know, GDPR, for example: if you're going to be doing anything online in Europe, you have to abide by the GDPR.
164
00:16:47,458 --> 00:16:47,938
That's great.
165
00:16:47,938 --> 00:16:49,058
It's Europe protecting
166
00:16:49,698 --> 00:16:50,578
its own citizens.
167
00:16:51,698 --> 00:17:03,138
But with AI, if I'm now using, you know, DeepSeek or whatever other tool, like, immediately all of your data goes into this black box.
168
00:17:03,138 --> 00:17:06,818
It's completely unregulated and unsupervised.
169
00:17:08,338 --> 00:17:10,178
And it's much easier.
170
00:17:10,458 --> 00:17:17,698
I don't assume that DeepSeek complies with the GDPR, or really anything.
171
00:17:18,098 --> 00:17:45,698
Well, they probably... it definitely complies with Chinese government oversight, even if it's not running in China. Like, if you ask it to tell you about the PRC, the People's Republic of China, it says all these really nice things about it, and it has a lot of opinions that coincide with the Chinese government's party line. So I think we're seeing that. But yeah, you know, there's always the possibility that if you have a bunch of regulation like GDPR,
172
00:17:46,338 --> 00:17:52,578
companies can just offshore, right, run their models in a different country and then say, like, screw you, Europe, I don't care.
173
00:17:54,578 --> 00:18:00,738
You know, I think that... Or they can say, well, as far as we're concerned, we're holding the data on the server.
174
00:18:00,738 --> 00:18:07,458
It's actually being processed locally on our own, you know, self-hosted LLM.
175
00:18:08,058 --> 00:18:11,458
But we don't really know how it works.
176
00:18:11,778 --> 00:18:13,938
We don't really know what biases it might have.
177
00:18:15,858 --> 00:18:16,498
and stuff like that.
178
00:18:16,818 --> 00:18:22,898
What are your thoughts around deepfakes that we're seeing, the good and the bad?
179
00:18:23,938 --> 00:18:26,658
Yeah, I don't know if there's a ton of good.
180
00:18:27,778 --> 00:18:33,298
You know, it's definitely good if you're like a marketing executive and you want to have Scarlett Johansson in your advertisement.
181
00:18:33,378 --> 00:18:35,938
That's a real trend that's going on; that's pretty good.
182
00:18:38,178 --> 00:18:43,858
You know, but like really it just takes protections away from individual contributors,
183
00:18:44,498 --> 00:18:48,898
away from artists, away from actors.
184
00:18:50,818 --> 00:18:53,458
Honestly, I don't think there's anything that can really be done to stop it.
185
00:18:53,938 --> 00:19:04,178
So, similar to how I feel about image generators using artists' work to train or video generators using YouTube videos to train,
186
00:19:04,938 --> 00:19:10,938
I don't think my opinion really matters just because it's kind of an unstoppable force in the same way that AI is an unstoppable force.
187
00:19:10,978 --> 00:19:14,578
You can't stop AI, regardless of whether you think it's ethical or unethical.
188
00:19:15,218 --> 00:19:16,258
AI is happening.
189
00:19:16,818 --> 00:19:18,418
Get on board or get out of the way.
190
00:19:20,098 --> 00:19:23,658
So people are saying, oh, no, Midjourney is going to take artists' jobs.
191
00:19:23,658 --> 00:19:28,418
It's like, well, if you're an artist, you can protest all day if you want, but I don't think that's going to do anything.
192
00:19:29,018 --> 00:19:33,058
You'd be better off finding some way to exist within the system.
193
00:19:33,538 --> 00:19:41,978
and find a new livelihood for yourself that utilizes the features of AI, which some artists are starting to do, right?
194
00:19:41,978 --> 00:19:48,738
They'll use AI as a starting point for sketches and brainstorming, or video.
195
00:19:49,058 --> 00:19:56,538
People are using it for, like, dream boarding to come up with prototypes, and then they make the final version from that, which accelerates the process.
196
00:19:56,538 --> 00:19:58,818
I think those are the people who are going to be successful.
197
00:19:59,298 --> 00:20:05,138
I think the people who are out there holding up signs, protesting AI, they're just kind of shouting into the wind, right?
198
00:20:05,938 --> 00:20:11,818
They're like yelling at a tsunami and, you know, more power to them.
199
00:20:12,658 --> 00:20:18,018
You worked with LMSs before, learning management systems, and in education.
200
00:20:18,618 --> 00:20:27,298
I mean, I was running an education business about 10 years ago, and when I saw the deepfakes, I thought
201
00:20:28,178 --> 00:20:44,098
that could be a very good application for teaching, especially, you know, if you can actually create some sort of a character that can express empathy. It can make it easier for people to learn and to educate.
202
00:20:44,498 --> 00:20:50,498
I mean, I've done all the corporate, you know, the
203
00:20:51,138 --> 00:20:58,898
plans that you have to go through, like sexual harassment training. I've gone through those; it's always the same process you have to take, and then everyone tells each other the answers.
204
00:20:59,138 --> 00:21:02,818
Like you can actually make this a good plan, right?
205
00:21:02,818 --> 00:21:09,538
You can actually educate people well if the deepfake capability is amplifying that.
206
00:21:09,778 --> 00:21:16,818
Yeah, you know, I think that that's really one good use case among many, the idea being to have
207
00:21:17,298 --> 00:21:24,738
basically an individualized education for each user, where, you know, the AI could be responsive to people's needs.
208
00:21:25,538 --> 00:21:46,818
You know, I think we'll still need teachers, but I foresee teachers being something more like coaches, where teachers would go around and monitor individual users' progress with the AI, which we already have in a lot of learning management systems that are used in primary education.
209
00:21:47,338 --> 00:21:50,258
where the teacher receives feedback on how people are doing.
210
00:21:50,618 --> 00:21:57,618
And that allows the teachers to concentrate their resources on the people who really need that extra help and who aren't succeeding with the AI.
211
00:21:57,858 --> 00:22:04,378
Whereas some users might be learning everything they need to learn from the AI and they don't need the teacher's help, right?
212
00:22:04,378 --> 00:22:07,698
A lot of people are very self-directed learners and they actually learn better.
213
00:22:08,418 --> 00:22:15,778
I would say I'm probably one of them, people who learn best when just left to their own devices, where you just say, here's Wikipedia, here's a couple textbooks, have at it.
214
00:22:16,618 --> 00:22:26,978
and those people will teach themselves, versus other people who might need more guidance, more help from a human, and that kind of frees teachers up to do that.
215
00:22:27,298 --> 00:22:35,858
So I'm hopeful that, you know, class sizes will remain relatively consistent, that like we won't see teachers teaching a thousand students in a classroom.
216
00:22:36,418 --> 00:22:44,058
We'll still see them teaching 30 students and they'll just have, you know, their time and resources freed up to help those who need that extra help.
217
00:22:45,538 --> 00:22:51,297
I think it'll also just give opportunities to make learning more fun, right?
218
00:22:51,297 --> 00:22:54,737
Like math could be presented in a lot of different contexts.
219
00:22:55,057 --> 00:23:04,657
So if someone's interested in music, right, it could present it to them as a musician and talk about how math is important for music.
220
00:23:04,977 --> 00:23:11,857
If someone's interested in cars, it could talk about how math is important for automotive engineering.
221
00:23:12,457 --> 00:23:16,737
And find a way to, you know, really reach a learner in the same way a teacher would.
222
00:23:17,697 --> 00:23:18,257
You know, it's funny.
223
00:23:18,257 --> 00:23:23,937
Like, my thought process every time I hear it: the vision you just described seems amazing.
224
00:23:24,017 --> 00:23:29,457
I'm thinking, like, oh my God, I would love my kids to be in a school that treats people like this.
225
00:23:30,137 --> 00:23:31,857
But then I'm thinking about, like, all the
226
00:23:32,897 --> 00:23:40,337
technical implications of such a thing. That means that every word that my kids will ever say
227
00:23:41,457 --> 00:23:44,017
will be recorded somewhere, right?
228
00:23:44,937 --> 00:23:45,377
Probably.
229
00:23:45,377 --> 00:23:53,057
I don't want to know what I said when I was seven, you know, or 13, or when I was really mad at the teacher.
230
00:23:53,697 --> 00:23:55,137
All of that's gonna be recorded.
231
00:23:55,297 --> 00:23:58,897
Yeah, I mean, I think that's part of just the brave new world that we're going into.
232
00:23:59,937 --> 00:24:07,017
You know, you or I come from an era, we're probably about the same age, I'm 40, where, you know, things were not being recorded.
233
00:24:07,017 --> 00:24:09,137
And when you went to a party, no one
234
00:24:09,537 --> 00:24:13,537
had a cell phone with the capacity to record everything that happened.
235
00:24:13,537 --> 00:24:21,537
Whereas now you go to a party or a sporting event or any other social thing and there's like eight people with their cell phones out and probably everything is being recorded.
236
00:24:22,417 --> 00:24:25,857
I think that, you know, kids are just a lot more comfortable with that these days.
237
00:24:26,577 --> 00:24:31,377
People have, you know, accepted that that's kind of how the world works.
238
00:24:32,017 --> 00:24:36,337
I think that there's more education to be done about the implications of that, right?
239
00:24:36,337 --> 00:24:43,217
That, like, kids don't necessarily understand the bad parts of everything being recorded and how that could impact them in the future.
240
00:24:44,017 --> 00:24:44,977
But I also think that...
241
00:24:46,177 --> 00:24:50,577
Well, I think that we'll see more acceptance from society in the future where...
242
00:24:51,857 --> 00:24:55,057
You know, like if you said something dumb when you were 16, right?
243
00:24:55,137 --> 00:24:57,017
Nowadays, that could come back to haunt you.
244
00:24:57,537 --> 00:25:14,177
But I think that in 20 years, there'll be so much volume of, you know, dumb things people have said on their records that a certain percentage of dumb things, especially things that were said 20 years ago, will be acceptable, right?
245
00:25:14,817 --> 00:25:16,097
Like I think we saw with...
246
00:25:17,377 --> 00:25:20,417
Brett Kavanaugh, right, the Supreme Court justice in the U.S.
247
00:25:20,657 --> 00:25:24,977
Stuff that he did back when he was 16 really came back to bite him in the butt, right?
248
00:25:25,857 --> 00:25:27,057
That was very bad for him.
249
00:25:27,777 --> 00:25:27,897
Yeah.
250
00:25:27,897 --> 00:25:28,377
Whereas in the...
251
00:25:28,377 --> 00:25:32,497
People are being canceled for something they posted, you know, a decade ago.
252
00:25:33,617 --> 00:25:33,937
Right.
253
00:25:33,937 --> 00:25:40,097
So I think there'll just be so much recording of everything that it'll be more of a percentages thing.
254
00:25:40,097 --> 00:25:43,617
Like if only 1% of the things you said were offensive, you're in the clear.
255
00:25:43,617 --> 00:25:47,537
If 20% of the things you say are offensive, then you might be in trouble.
256
00:25:48,097 --> 00:25:50,897
But I think that, you know, a certain amount of...
257
00:25:51,017 --> 00:25:53,777
I don't know, I'd almost call it, like, chaff, right?
258
00:25:53,777 --> 00:25:57,457
That, like, little bits and pieces of things you said that you shouldn't have said.
259
00:25:57,457 --> 00:26:03,137
As long as that percentage is low, it'll be acceptable just because everything is going to be recorded, like you said.
260
00:26:03,857 --> 00:26:08,097
That's just kind of the future we're going into, where you're recorded at all times.
261
00:26:08,097 --> 00:26:14,017
There'll probably be cameras everywhere, like in London, all over the world recording everything you do in public.
262
00:26:14,657 --> 00:26:16,897
Nothing you do will really be private.
263
00:26:16,897 --> 00:26:18,737
I think that's kind of the way we're going.
264
00:26:19,857 --> 00:26:26,137
Everything you do, like in school, like you said, will be recorded, and that's just, again, kind of the future.
265
00:26:26,137 --> 00:26:28,017
I don't think that there's a way around that.
266
00:26:28,017 --> 00:26:31,457
I think people just have to adapt to it.
267
00:26:32,617 --> 00:26:38,897
So maybe to wrap up, I would love to bring this to the GRC teams
268
00:26:40,057 --> 00:26:40,417
that we see.
269
00:26:40,417 --> 00:26:51,937
So GRC, governance, risk, and compliance, is a world of policies and processes, and a lot of those things can definitely be automated and sometimes eliminated with AI.
270
00:26:54,097 --> 00:27:04,737
You know, the analogy is to artists, who are being displaced and now have to find a new place for themselves.
271
00:27:04,857 --> 00:27:05,777
When you look at
272
00:27:08,137 --> 00:27:20,097
an AI team or team member, what do you think are the steps they need to take in order to find their new place?
273
00:27:20,737 --> 00:27:23,697
The biggest is just becoming really familiar with AI.
274
00:27:24,257 --> 00:27:29,137
So I really encourage everyone to go and buy a pro plan for Claude or ChatGPT.
275
00:27:29,617 --> 00:27:37,617
Personally, my daily driver is ChatGPT, but I also have subscriptions to Claude and Perplexity, which are two other engines I like.
276
00:27:38,817 --> 00:27:42,577
For me, it's more just to stay on top of things, but I think for a normal user, one is enough.
277
00:27:43,577 --> 00:27:49,177
And just spend 20 to 30 minutes a day minimum interacting with it and using it for different things.
278
00:27:50,337 --> 00:27:52,657
You know, there's a lot of different use cases.
279
00:27:52,817 --> 00:27:57,657
So, for example, I use ChatGPT and Claude at work, right?
280
00:27:57,657 --> 00:27:59,617
When I have a work problem, I input it.
281
00:27:59,897 --> 00:28:03,937
When I produce output at work, I run it through these tools.
282
00:28:04,897 --> 00:28:14,577
I do a lot of automation coding, and I'm starting to use Cursor for that as well, where I say, like, look at my code and give me suggestions to improve, so I can learn to be better.
283
00:28:15,697 --> 00:28:19,057
You know, in the future, I think I'll trust the tools more and I'll say, just do it for me.
284
00:28:19,617 --> 00:28:25,337
But right now I tend to do it myself and have it check my work versus it doing it and I check its work.
285
00:28:26,017 --> 00:28:28,177
So I think we'll see an inversion of roles there.
286
00:28:30,257 --> 00:28:42,577
You know, I think that that will continue to be a big thing, real-life human feedback, where, you know, people will have an AI do something and then say, like, is this output good or bad?
287
00:28:42,777 --> 00:28:45,137
Both in terms of training the AI
288
00:28:45,937 --> 00:28:52,457
but also in terms of creating a role for themselves, like, what are people going to do? They're going to, you know, check the AI's work.
289
00:28:54,017 --> 00:28:59,057
You know, I think that humans, for now at least, are a lot more creative than AIs.
290
00:28:59,697 --> 00:29:05,057
So in things that involve creativity, you know, having a human in the director's seat can be helpful.
291
00:29:05,137 --> 00:29:08,737
So, like, Veo for video or Midjourney for images
292
00:29:09,137 --> 00:29:17,097
can do a lot of cool stuff, but it's really up to the human being that's prompting them, that's writing the inputs to come up with the creative stuff.
293
00:29:17,857 --> 00:29:30,816
So figure out a way to fit those into your tool set, just the same as people figured out a way to use Photoshop for photography or Adobe Premiere Pro for video.
294
00:29:31,216 --> 00:29:36,416
You figure out ways to let the computer do what it's best at, and you do what you're best at as a human.
295
00:29:37,936 --> 00:29:50,816
But in particular, I think that, unlike with some of these other technologies, where, like, there was a fixed inflection point where, you know, it was created, you started using it, and it is what it is, AI is continuing to evolve.
296
00:29:50,816 --> 00:29:53,856
So that's why I recommend that kind of daily practice because
297
00:29:54,776 --> 00:30:01,736
what is a good use case for AI and a good place to slot yourself in today may change by tomorrow.
298
00:30:02,256 --> 00:30:12,896
Also, you have to continually sort of stay on top of that and think to yourself, like, how can I best use this technology to make sure my output as a human plus AI team
299
00:30:13,216 --> 00:30:14,416
is optimal, right?
300
00:30:14,416 --> 00:30:17,616
You're always trying to optimize that human plus AI output.
301
00:30:18,016 --> 00:30:28,256
And I don't know that we'll ever get to a point where an AI alone will beat human plus AI just because of the synergies and the ways that AI thinks so differently from humans.
302
00:30:28,856 --> 00:30:30,496
So, you know, really, it's just a matter
303
00:30:31,336 --> 00:30:36,176
of thinking not in terms of what do I do, but how do I maximize output?
304
00:30:36,256 --> 00:30:42,656
How do I make sure that the output from this human plus robot team is the best it can be?
305
00:30:42,656 --> 00:30:46,096
And then how do I continue to adapt tomorrow to stay on top?
306
00:30:46,256 --> 00:30:47,016
So no other choice.
307
00:30:47,016 --> 00:30:52,416
We've got to dive in and learn the tools and adopt them to find our place, I guess.
308
00:30:52,736 --> 00:30:55,216
We don't know where the wave is taking us, but we want to be on top.
309
00:30:56,256 --> 00:31:01,776
Yeah, I think like if you look, no one would hire someone who said, I don't use telephones or the internet, right?
310
00:31:01,776 --> 00:31:02,896
You'd be like, that's ridiculous.
311
00:31:02,896 --> 00:31:03,456
Get out of here.
312
00:31:04,256 --> 00:31:06,096
So the same thing will apply to AI.
313
00:31:06,096 --> 00:31:08,936
If someone says like, I don't use AI, I don't believe in it.
314
00:31:08,936 --> 00:31:10,576
No one's going to hire that person, right?
315
00:31:10,576 --> 00:31:11,856
Because that's just the future.
316
00:31:12,336 --> 00:31:14,056
That person is stuck in the past.
317
00:31:15,056 --> 00:31:24,336
So I think it's up to individuals, individual contributors to continually refine and adapt and say like, how can I be the best I can be?
318
00:31:25,296 --> 00:31:31,496
You know, being really good at using AI is just the same as, like, being really fast working with Excel, right?
319
00:31:31,496 --> 00:31:39,136
An accountant who's really fast and capable with Excel and knows all the functions is more useful than an accountant who doesn't.
320
00:31:39,616 --> 00:31:48,256
So similarly, a worker who's really good with AI is going to be much more valuable to a company than a worker who doesn't really know how to use it.
321
00:31:48,256 --> 00:31:52,016
Like they use it, but they're being forced to and they don't really understand it and they don't like it.
322
00:31:52,896 --> 00:31:54,656
So some of it's going to be acceptance.
323
00:31:55,056 --> 00:31:56,496
Some of it's going to be training.
324
00:31:57,536 --> 00:32:01,656
But ultimately, I think it's just a tool or a skill just like anything else.
325
00:32:01,896 --> 00:32:10,976
And it's up to people to stay on top of it and try to maximize their utilization and their abilities with this tool moving into the future.
326
00:32:11,456 --> 00:32:12,376
Leo, thank you so much.
327
00:32:12,376 --> 00:32:15,616
We're out of time, but this has been fascinating.
328
00:32:15,616 --> 00:32:17,136
I've been really enjoying reading your stuff.
329
00:32:18,656 --> 00:32:19,776
Thank you, I appreciate that.
330
00:32:19,816 --> 00:32:24,416
There's always a perspective I didn't think of, or, oh, right, there's the creativity for kids.
331
00:32:24,496 --> 00:32:27,656
Okay, now I have to worry about that for a few minutes.
332
00:32:29,176 --> 00:32:30,096
So thank you so much.
333
00:32:31,456 --> 00:32:32,896
How could people contact you?
334
00:32:33,936 --> 00:32:37,056
Yeah, so you can visit my website, rodman.ai.
335
00:32:37,056 --> 00:32:43,056
You can also look me up on LinkedIn, and I'm also on the other platforms, Twitter, Instagram, TikTok.
336
00:32:44,416 --> 00:32:47,496
So, you know, check me out, look at my posts, shoot me a message.
337
00:32:47,496 --> 00:32:49,296
I'm always happy to talk to people.
338
00:32:50,456 --> 00:32:53,016
like yourself, or really anyone else who's interested in AI.
339
00:32:53,016 --> 00:32:53,296
Great.
Want to join the chat?
We are always happy to chat with GRC thought leaders and market innovators. If you are one - let's talk!