OpenAI tools and the future of programmers
Transcript:
All right, the Evernote recording is now live. I'm happy to send it to anyone who would like it, and I'll figure out a way to put it on the wiki at some point. I don't know. Yeah, okay, I'm sorry to interrupt you, Mache (sp). What were you saying? Let's go to PJ.
Oh, well, my point was gonna be that, for me, all of this AI stuff is doing what, you know, archetypes were doing before: it's giving me the template, right? And the template is just getting better and better and better. So now my job is about editing, testing, you know, doing the real value-add for my business rather than writing all this boilerplate crap. Although I enjoy boilerplate crap; it's because I'm a weird person, right?
So, yeah, you just said testing. It seems to me that's a realm that AI could really excel in, because, you know, from the beginning we've always seen testing as like this mathematical proof, right? You know, feed it A and you should get B back. And we all know, I mean, you've written many books on this, we've been talking about testing for years, and it's still a topic of conversation to get developers to write tests. So I think there's an opportunity there to just throw AI at this.

And that's a problem?
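The "feed it A and you should get B back" framing above is exactly what a unit test encodes. As a minimal sketch in Java (the function under test and its values are invented for illustration, not anything from the discussion):

```java
// A tiny "feed it A, expect B back" check. The function under test
// (square) and the expected values are invented for illustration.
public class FeedAGetB {
    static int square(int a) {
        return a * a;
    }

    public static void main(String[] args) {
        // The "mathematical proof" framing: input A = 7, expected output B = 49.
        if (square(7) != 49) {
            throw new AssertionError("expected square(7) == 49");
        }
        System.out.println("ok");
    }
}
```

In practice this is what a JUnit assertion does; the hard part the speakers keep circling back to is choosing the As and Bs, not writing the check.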
We're talking about designing tests. So, what, I mean, all of it. He's just saying, you know, take all of testing. Like, why doesn't... I mean, it's the task that I, at least, you know, hate. Yeah, we're still talking about it today, right? I mean, we're still asking developers to write tests. You were talking about, like, being around TDD people; it's, like, 15-plus years that we've been talking about this, right? It's still, like, what? So, yeah, there you go.
Sorry, such bad news for all of you. I am gonna burst your bubble, because ultimately testing is about getting feedback about the extent to which your actions met your intentions. At the very lowest levels that's unit tests, and at the much higher levels it's "did we make revenue," but it's always: do a thing, see what happens. And in the world of AI, especially where we are today, like, I love your perfect world, but we're not there yet, and right now ChatGPT and its ilk are far more on the end of "it is plausibly correct" than "it is accurate." And that means that the work shifts to testing: we got a thing, did it meet our intention? So I have such bad news. Now, I also have good news, because it turns out that if you don't think testing is fun, you're doing it wrong.

I've been hearing that for 20 years. I'm still talking about the same thing.
So the question I have is... you brought up capitalism earlier, which... No, we can talk about whatever we want. But Masha, you were talking earlier about the example of circuit boards, right, that they can be done better than a human can, more efficiently, and probably at some point cheaper. Although I don't know what, let's assume, enterprise-scale AI looks like right now. So, because we do live in capitalism, the big concern, I would think, for people who design circuit boards is: why would the company pay you if they can just have this AI built, and then you're out of the picture?
Yeah, I think especially when we get to this level two that you're talking about, where eventually the AI is strong enough that you don't even need to program. All you need to be able to do is ask it a question and then see if it's right with a test.
So, yeah, I mean, what do you guys think? Is it that, like, every layer... you know, technology advances through, like, layers of abstraction. I think it was kind of, you know, saying something along these lines as well, that our lives are going to get better as a result of this, right? But, you know, we all use compilers underneath all the code that we write. How many of us work on compilers today? You know, how many of us know someone who works on compilers today? How many people are there who work on compilers today, right? You know, like, back when I started programming, yeah, I knew people who worked on compilers. Well, everyone has to take compilers. I don't know whether that's still the case. You know, I think, like, if you don't take a compiler course and you get a CS degree, what kind of Mickey Mouse degree did you get? But, yeah, writing a low-level compiler is just kind of not something that a programmer today would ever use in their day-to-day life, for, like, 99-point-some percent of programmers. So, I mean, I think each next level of abstraction just makes the layer below kind of a niche at that point.
There are still people who write compilers, and they're great at it, and they're experts, but we just never think about them. And there will be people who will write this software, just like the AI software. Someone is going to write Torch or whatever framework it's going to be running on. We're just not going to think about them. They're going to be off somewhere, and it's just magically going to be produced.
Because it's harder and harder to predict the future. That ChatGPT is exactly like the IBM 360 that your father worked on 50 years ago. So we cannot even think what's going to happen in 50 years. I mean, ChatGPT will be an ancient, archaic thing. So we have no way to imagine what's going to happen in 50 years. You guys probably read that book, Singularity. So, lots of thoughts there. But we have no idea, because time is accelerating, so inventions are accelerating. We have no idea what's going to happen, maybe not even in 20 years.
So I'd like to continue Elizabeth's train of thought. I'm part of a geek cabal. We've been testing ChatGPT, and it is really horrible at facts. If you talk to it about history, it's very Trumpian: it will sound very, very convincing, but it's just wrong.
Oh, you don't mean embracing the political philosophy.

It's just the biases that probably were in the data that it was trained on. With the facts, it's probabilistic.
So basically, yeah, if you talk about historical events and you say, "No, but really, this is the fact," it's like, "Oh, I'm so sorry." And it will take your input, put it into the buffer or whatever, right? And in your session it will get a little bit more accurate, because now you've told it, and it will keep going. And also, it's really bad at math; like, it doesn't understand math at all.
And there was an interesting article by Stephen Wolfram on this topic, where he was saying ChatGPT has some good points, and what you should really do is marry it with Wolfram Alpha.

Yeah, Alpha. And because we actually have a lot of friends... No, the guy who came up with Wolfram, who thinks you should use it, is, like, crazy.

Right, but that's a good point.
The point is, right now, if you don't understand how it works and you submit your paper somewhere, you can be very embarrassed. So just a word of warning: it is very creative, very convincing, but really bad at facts, and it doesn't know anything about math.
Well, I think it's really good for creating a framework for your paper. And then, remember, David, you always have a chance to press the button and make another one, right? And then it may just vary dramatically from version A to B, right, to C and D and F. Right? It's just like, I tried it, I wasn't happy with the results that were presented to me, then I pressed it again, and it actually came up with a different set of variables and facts that it spins out at the end of the session.
So I think Andy made a really good point about early searching. And I was actually just talking to someone about this, maybe even last night: when I was a kid and learning how to search, my stepmom, Leanne, whom most of you met during check-in, was trying to think of a song. She couldn't remember what it was called, because she didn't really know, at the time, how to use search effectively. And I was able to find it in five seconds. And that was magic to her, and to a lot of my less technical friends. So I think you're right that the skill is going to be how to ask the specific AI to get the output that you want.
But speaking of ChatGPT, I know you said it's not good with facts, and it's not good with dates or math or whatever. But if you're in an organization, and let's say, instead of having a team of software engineers, I have two software engineers who then manage this AI, that AI will specifically be trained on doing what the team used to do. So you're not going to have those issues.
Yeah, you probably can't replace your engineering teams with ChatGPT today. But in five years, if you as engineers are told, "Hey, build this AI, and then you won't have a job"... I know a couple of people here are optimistic, but is anyone worried about that? Because I would be, if my current job was technical. You're not worried at all?
I think one thing that I think about: you mentioned people who write compilers. I would not be surprised if there were more people writing compilers today than there were 50 years ago, just because the industry has grown so much; it's going to be a smaller percentage of the total value that's created by the software ecosystem. So I do think that... like, I have a friend who's in a different session, and they say they think that writing more code using robots just means there's going to be a lot more code, which means there's going to be a lot more bugs that we can find, which means there's still going to be demand for humans within that system.
You just basically said that the humans are going to be doing the shit jobs, right? We're going to give the fun jobs to the robots, and then we're going to have to clean up the frigging toilets and the, you know, truck stops.

No, finding the bugs and fixing the bugs.

That is literally cleaning out the toilets and the truck stops.

Well, it's finding the thing that needs to happen. That's fun.
No, no, no, but, one second, let's look at another one. What if we all need to get really good at specification, right? And actually asking the system exactly what it is that we want, right? Maybe that's the job.
And to me, that's how I'm... like, to me, specification has a lot to do with being good at abstract thinking, and, you know, it has a lot to do with, like, searching the space of possibilities, you know, edge conditions and all these things, well, these things that some testers do. And to me, that's actually fun. Like, I moved from being a, you know, software developer into being a tester, and I'm having fun, and there are some interesting tools coming into our field, and, no, I think it's all good. If you're not having fun testing, you're doing it wrong.

That's the rumor.
If testing is checking that the software satisfies the intent, the other big challenge is specifying the intent.

Exactly. And knowing the intent.

I mean, that's fun.
Hello. We've been talking about Agile for 20 years, and what is that about? We don't know what we want. If it was so easy to specify intent, we wouldn't be talking about Agile, because it would have been specified, and the specs for the software would be perfect, and we'd know exactly what to go off and build, right? But that problem is inherently, you know, what, NP-complete or whatever, an NP-hard problem, right: specifying intent.
So I don't think that Agile says don't specify. I think it says don't do it all up front.
I'm just saying that Agile deals with the challenges of being able to specify intent. And if it was so easy to do, we wouldn't need Agile.
So maybe that just means we have jobs for the foreseeable future, because, yeah, our bet is that... well, maybe it's both. We've been struggling for 50 years to figure out how to specify intent correctly, so either that's just a hard problem...

Or computers will figure it out very quickly.
Well, I mean, I think, as PJ said, you know, move up to the high value-add, right? You know, I don't need to write the boilerplate code; I can move to the higher value-add stuff. And, I mean, yeah, we've always been moving in that direction. But a lot of... you know, like, for me, the code that I write... it's like, yeah, I kind of have an idea of what I want, you know, an idea of what I wanted to do. But, you know, I work a lot with data, and a lot of it is just exploring. So I can't tell you ahead of time... well, I can tell you ahead of time, "this is what I want to try," but I can't tell you ahead of time the final piece that I'm going to want to end up with, that it's going to be this, this, this, and this, right? Because there's no way for me to know that, because I don't have enough experience with it. I don't know enough about the domain yet, right? So I'm learning as I'm trying things, and, you know, in order to try things, I need to write code to analyze the data, so that I can learn from it and decide what next thing to do. It's kind of an iterative process, as I think most software development is. Like, the business owner, you know, they have a rough idea, but I think it's kind of very similar: you try something, and it either works or it kind of doesn't work as well as you were hoping, but it gives you another idea, so you kind of then go to the next thing.
But I think you just described generative AI.

Yeah. It just happens.

That's what I was thinking. And it takes us weeks or years of a lifetime.

Okay, all right, all right, all right.
Yeah, that's the way I use ChatGPT now, right? I give it the first prompt, and it will be off by a little bit, right? And then I say, "Update this to include these other cases."

Yeah.

And then it gives me a revised version. And then I look at that and I think about it, and I say, "Oh yeah, and also add in this. Give it to me again." So the cool thing is it iterates really quickly.
Yeah, right. You know, within five minutes I get to, like... I was writing personas the other day for specific software cases, right, and within ten minutes I had, you know, six fully written personas, with all of the different characteristics that I think are important added in. And if I had had to sit down and write that myself, it would have taken me a couple hours, right?

Yeah, just writing text.
So for me, that's the way I use the ChatGPT thing: it's this iterative me-giving-it-feedback. I love the conversational style of the way the ChatGPT thing works. This is really, really exciting for me now.
I haven't done enough, like, code stuff with it yet to see, but I can imagine, say, asking, you know, "Give me a Java class that will connect to a database at this URL and select all of the user records," right? Yeah, and it would give it to me, and then I would iterate, right? And it's gonna give me Java code, and I can get to the point where I could then take that, and then I'm doing the testing.
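The kind of boilerplate being imagined there, a Java class that connects to a database at some URL and selects all the user records, might come back from the tool looking roughly like this. This is a hedged sketch, not anything generated in the session; the JDBC URL, table, and column names are invented placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of ChatGPT-style boilerplate: connect to a database
// and select all user records. The URL and schema are made-up placeholders.
public class UserDao {
    static final String QUERY = "SELECT * FROM users";

    private final String jdbcUrl;

    public UserDao(String jdbcUrl) {
        this.jdbcUrl = jdbcUrl;
    }

    // Fetch every row from the users table; each row is reduced to its
    // "name" column here just to keep the sketch short.
    public List<String> fetchAllUserNames() throws SQLException {
        List<String> names = new ArrayList<>();
        try (Connection conn = DriverManager.getConnection(jdbcUrl);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(QUERY)) {
            while (rs.next()) {
                names.add(rs.getString("name"));
            }
        }
        return names;
    }

    public static void main(String[] args) {
        // No live database here; just show the query this class would run.
        System.out.println(QUERY);
    }
}
```

The iteration described above is then things like "use a PreparedStatement," "add connection pooling," "handle the missing-column case," with the human doing the testing at the end.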
You know what else is interesting? It's really good at getting a job at, like, a FAANG company, right? Like, you give it any LeetCode problem, it will ace it.

Yeah, that was clear. That's because there's a bunch of answers on the internet, right, right.
But the point is, then, you know, when we're talking about jobs of the future versus jobs of today, right? Like, to get into a FAANG company, at least at my age, I have to spend, like, months and months and months practicing for these interviews that, yeah, as soon as I get hired, I will, like, never do that work again. Right? What they will ask me to do is to specify the system, right, because that's my real job.
But I have to, you know... so the question is, like, are the people doing the coding these days really having, you know, dream jobs? Do they really, really enjoy it, or are they fooling themselves, right? Because I think, like, a lot of the coding jobs today are still way, way too low-level, and not that much fun, and, like, a lot of grind. So that's my experience.
I think that's a really interesting point. Like, even with the v2 that Andy was talking about, some of these ChatGPT things... like, I think it really comes down to, like, as a software engineer, I need to be really good at using these tools to, like, effectively do my job, and I'm not scared of these tools replacing my job.
But as a member of... like, I work on a platform team, and, like, we have a really wide array of services. I can only focus on so many things at one time. Like, I need to take these tools and, like, be more effective across the board, like, all my services are actually leveling up, because I know there are things that I'm neglecting, things that I'm not doing right. Like, level one and level two of this stuff is really: how do we bring this in, and how do we start using it effectively?
And that's, like, what I'm trying to understand, and even... like you're saying, with iterating on some of this stuff: how do you really get something from these, like, open-source tools that you can use internally? Because internally you have all these libraries, you have all these things, right, that, like, you have to go in and fix and go in and do. So, like, is there a way for us to take this and just, like, easily fill in those puzzle pieces without having to, like, constantly... I think maybe the answer is, like, you have an internal team that's, like, feeding it this data, right?

Yeah, but even if you have that team feeding this data, it's only gonna be as effective as the person asking those questions.

Sure, putting those pieces in place. So, yeah, it kind of goes back to what Andy said.
I actually have a question. Does anybody here hire people right now? You hire people. So what is your solution to this problem, where any problem you give to an engineer, they can immediately solve it with an AI? Or is that a problem? Do you say, "Well, they have, you know, like you're saying, they have a tool that they can use to solve the problem; that's all I need"?
It's an interesting question. For what it's worth, I am not hiring, or, when I interview, I'm interviewing leaders.

Okay, it's a different skill set, gotcha.

But even when I was hiring, let's say... I've always been less interested in, like, "can you, whatever, solve the big-N problem," or whatever it is, big-O...

Sure.

...and more in: what's your passion? What are you interested in? And can you solve any problem that's thrown at you?

That's right. I'm not so much worried about that paradigm shift, the technical replacement.
I would kind of take objection to your premise. Okay, right? I mean, I think, you know, a lot of, like, hard engineering problems... there are many approaches. Like, take scalability, right? You know, there's, you know, asking an AI... I have not tried this, you know, but I think a lot of times... you know, I'd be very surprised if AI could effectively solve, you know, like, hardcore scalability challenges, right? Even if it could propose... you know, I mean, I'm just thinking back to, like, the work Brian Delasanti used to do. You know, approaches that sound... I mean, like, when we did our product, you know, there were generations of evolution of, kind of, scalability. You know, you kind of do the easier thing first, and get to the max scalability on that, and then you have to go down a level lower, you know, and you max out the scalability of that, and then you go down even lower, and you kind of, you know... So, I mean, maybe AI could give you that solution, but isn't part of the solution kind of this whole path? You know what I'm saying?

Yeah.

And maybe that's the specification thing. I don't know.
And there are probably 10 potential bottlenecks a system like yours could have.

Yeah.

And it could give you 10 solutions for each of the 10 bottlenecks, but the question is, what bottleneck are you actually going to have? Right? Which of those 10 is actually going to work, and which is achievable and within your skill set, and, and, and. That makes it really tricky.
So interesting. So, this is old tech, and I'm a firm believer that, like, whenever we say "the tech doesn't do that," we will be proven wrong at some point in the future. However, at Netflix, we did some early, I'll say, AI stuff around looking for errors on individual instances. So you deploy an app, and we watch it. And the idea was to alert developers before the actual thing got triggered. So the system would learn: okay, like, when this starts, you know, when, whatever, CPU starts spiking, alert now, not later.
And what we discovered, and this is a project from, like, maybe six or seven years ago, is that, I was gonna say nine times out of 10, but more than half the time, when you'd get the alert, Eric, you'd be like, "I already know what that is, and it's not a problem." And so then we'd have to go back to the system and say, okay, if it's this, don't do that. And it became a rules engine with lots of ifs, like, right. And so we abandoned it, because there was, like, no value in this.
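That drift, from a learned alert into a pile of hand-maintained "if it's this, don't do that" exceptions, can be caricatured in a few lines. This is not the Netflix system; every name and threshold below is invented for illustration:

```java
import java.util.Set;

// Caricature of the alerting logic described above: a learned signal
// ("CPU is spiking") buried under hand-added suppression rules.
// All names and thresholds here are invented.
public class AlertRules {
    static final double CPU_SPIKE_THRESHOLD = 0.90;

    // Each time an engineer said "I already know what that is, it's fine,"
    // another suppression entry got added here. This is the rules engine
    // with lots of ifs, reduced to one lookup.
    static final Set<String> KNOWN_BENIGN = Set.of(
            "nightly-batch-job", "cache-warmup", "deploy-in-progress");

    static boolean shouldAlert(double cpuLoad, String detectedCause) {
        if (cpuLoad < CPU_SPIKE_THRESHOLD) {
            return false; // nothing unusual, stay quiet
        }
        return !KNOWN_BENIGN.contains(detectedCause);
    }

    public static void main(String[] args) {
        System.out.println(shouldAlert(0.95, "deploy-in-progress")); // false
        System.out.println(shouldAlert(0.95, "unknown-cause"));      // true
    }
}
```

The maintenance burden is exactly what made it not worth keeping: the exception set, not the detector, became the product.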
And, to your point, the humans have, at least then, and I'm sure still today, like, this ability to kind of infer: "That's not really... like, the book says that's a problem, but in this system, it is not a problem. We've seen this before; it's not gonna break." So I do wonder about that kind of paradigm. Will, you know, the computers ever get to a point where they could reason kind of like that? And today the answer's no, but I'm sure eventually they will.

Yeah. And on that one, the whole ops space is working really hard.

Yeah.
Paul, to go back to Anton's question about the interviewing, right? So I recently had a take-home test for an interview, and I used ChatGPT. Right. And then, during the interview review session, I told them: I used ChatGPT to do this; here are the problems I ran into; here's how I corrected those. Right. And then I published it. Right. So I was just very straightforward with it.
And I thought to myself, if I was interviewing people, one of the questions I would ask them is: why didn't you use ChatGPT to generate the base code for you? Right? You know, why did you spend eight hours writing base code when you could have gotten it in a minute?
I do have a blocker around why I'm sometimes reluctant to type stuff into ChatGPT. And that is: sometimes, if I'm working on something professionally, I'm not sure where that data is gonna go. Right? So, yeah, am I allowed to do that? Like, you know... so that's one thing.

There's a solution to that. Maybe ChatGPT has, like, a paid version where they'll throw away your data in the future, perhaps.
Right. It's coming up. There's a $20 subscription model that they're going to introduce at the end of this month.

Yeah. More professional.

But it's more interesting: there was nothing about throwing away your data.
Well, I'm pretty sure that there is configurability of some sort. Like, you probably get some kind of a gain over the tool. But, being here in the valley for a long time, I'm just wondering, you know, about this whole relationship between the ChatGPT parent company and Microsoft, what that's gonna produce, and how it's going to affect Alphabet in general. Like, whether they're going to be in a world of hurt, or they're gonna come up with their own set of tools.

If you're in the search business right now, you are in a world of hurt.
It's gonna...

Yeah. You can figure out how to handle it.

Yeah. It's just literally... from Facebook, Meta, Apple and everybody there, they're slashing their throats, and there's nothing but blood. And then all of the digital marketers that were solely reliant on very accurate data, that data isn't there anymore. So the actual accuracy is just going...
You lost me on this accuracy of the data, digital marketers?

Well, because there was a superfluous amount of data on the users, until Apple introduced version 14 of their operating system, iOS 14, which makes the user opt in to let the application know certain things about themselves. So the business model for Facebook and Instagram just went directly down the drain. Now the same goes for AdSense, for Google, right now, because of the...

On iOS.
On
iOS.
For
certain
things,
I'd
much
rather
ask
chatGPD
and
pay
20
bucks
and
then
be
done
with
it,
right?
Yeah, but, I mean, Google... okay, my perspective, and I'm not in the valley, so I don't get the talk and all of that, I'm not part of the grapevine. But, I mean, Google has been, in my mind, more on the leadership side of AI research and development than OpenAI, for a very long time. So the fact that they don't have their ChatGPT right now, in my mind at least, is not a capabilities issue. They have more of a business... other issues that they've kind of alluded to, like safety.
It's okay for OpenAI to release this, because it's OpenAI, and if it starts spewing wrong information, as we all know it does, it's okay, because it's OpenAI. If it's a Google product, and Google released it, and it's now spouting wrong information, that's not okay, because we trust Google to give us correct information, right? So there's a far greater potential detriment to Google's business model from releasing this, which I think is probably the reason why they haven't released something like that yet: the downside is far greater, the risk is far greater for them, not because they don't have that technology. So, in my mind... I don't know for a fact, but from my understanding, they are way ahead in AI research.
I don't know if they are or they aren't, because, essentially, when you grow, and the companies are getting bigger and bigger, you kind of underestimate what other folks are doing, and you're not stepping into the market. And that's what happened with Yahoo. They were, like, the dominant force. And then MySpace was wiped out by Facebook.
I'm just saying that the relationship between OpenAI and Microsoft, and talking about Bing being part of it... I'm not worried about it. I mean, I love the OpenAI tool, and whether that's with Bing or with whoever else, I'm going to be definitely using it, right?

Yeah. I think that's a good point about that. But I'm just worried about Google.
I wanted to respond directly to the Microsoft-OpenAI thing. Yesterday, there was an announcement that Google invested a major stake in Anthropic, which is another large language model company. So I'm actually not sure... if it becomes a big thing, I think that Google is going to try to be as fast a follower as possible.

But they're followers.

Yeah. What do you mean by that?

Yeah.
Like, if it actually looks like Bing's eating Google's market share, I would expect that Google, even if they have safety concerns, might be like, "We need to release this in order to..."
I think it's interesting, because we're all looking at Bing. But so, Microsoft: my understanding is that, in the last two weeks, they put 10 billion into OpenAI. And Satya, the CEO of Microsoft, has made it very clear that AI in general will now become part of their entire toolset. So, to me, Bing's like V1. And V2 is, like: how are they incorporating it into... whatever. I don't even use Microsoft products anymore, but, like, Teams, Office.
Yeah, exactly. That's where it starts to get super interesting. And you can see, and I do believe, Google has some work going on there, because you can see it in Gmail, when it will complete your sentences for you. I mean, it's, like, very, like, whatever, pedestrian, but it's like: this is getting very interesting.
But to your statement about Google throwing a bunch of money into that: Anthropic is interesting. There's first to market, and then there's first in market. I think OpenAI is first to market, and, like, we're all talking about ChatGPT. The next big thing in the valley will be who can actually capitalize on this and ultimately make money off of it.
I had a thought related to what you said, PJ, where you would ask people, you know, why didn't you use ChatGPT for the base code of this? Does anyone, or has anyone, thought about working with their teams and doing, like, training sessions using these new AI tools?
'Cause I don't lead a team right now; it's something I'm looking to do, but I would definitely be doing that. That would be, like, a once-a-week "let's refresh on the state of the art," to make all of our work more efficient.
Can... as part of that, can you go into a little bit more detail on, like, what did you have to correct? And, you know, how did you have to massage what ChatGPT came back with? And do you think that it's, like, ready to be used? And, importantly, did you get the job? You said it was part of an interview.
Oh yeah. Yeah. So this was for the dev advocacy work that I'm doing with Liquibase. So the take-home test for this was: write a blog article, write a tutorial, and create a video.
Okay. So it wasn't to do code, right?

It wasn't code. Okay. So, the blog article: I mainly took the blog article from ChatGPT and put it in Grammarly. And I also put it into...

Got rid of the watermark.
And then there's another tool, Hemingway, I think, which is another editing tool. Right. So I ran it through both, corrected some of the things, made sure I was happy with some of the choices ChatGPT had made.
The main thing I did with ChatGPT was, for the blog article, I told it to massage the style of it, like, make it cheekier. Because I saw somebody on TikTok: they added that into their prompt, and then they liked it better. So I added it into my prompt, and I did like it better. It was, like, a funner blog article.
So, the tutorial: it did very poorly.

Yeah.

It didn't... it did okay, but some of the steps weren't correct for how to set up Liquibase, and that's where it's actually kind of like code; it's Liquibase configuration files and stuff like that. But it did, to your point, give a really good outline for the tutorial.
And the steps you need to take... and, you know, every iteration I did on it was, like, five or six steps. So I landed on: cool, I'm going to do it in five or six steps, and I'll just massage this a bit.
And then, for the video, I had it write the script for the video, because I'd seen people on TikTok saying... I've got ChatGPT... sorry to be flipping everybody off... I've got ChatGPT writing my blog scripts now.
So I had it write a script for me, which I then just kind of changed to be more my style. And then I didn't actually read that script when I made the video; I just had it in my mind, and I was, you know, kind of more fluid about it. So that's how I used it.
I did it enough... you know, it's a part-time consulting role, but I did end up getting the job, and they were impressed that I used ChatGPT to create this, and they were happy with my answers about what I changed and everything.
But it probably made you a lot more efficient, time-wise.

Yeah.

And, you know, so why wouldn't they be happy about that? Right?

Yeah. Right. You know, exactly. Yeah.
Has anyone used it for code, though?

Yes. Yes.

Oh, okay. Well, then, why don't you tell us?
Well, uh... well, I was talking about it in terms of the people management side, in terms of helping your people learn how to use the tool effectively, because the same sort of skills that made me good at search in the mid-2000s make me good at interfacing with the AI now. But not everyone has those skills, because people who grew up... even my sister, who's here, who's 10 years younger than me: her search has always been good. She's always been able to just type in whatever, and it's popped up. Whereas when I was starting to search things, you had to ask in a specific way. And that's the current state of AI.
What I was able to do, just like PJ... I've always been interested in this, funnily enough: I had the AI help me code an AI with NEAT (NeuroEvolution of Augmenting Topologies). That's something that's been too technical for me. And there was the same sort of thing, where I asked it and it said, here's the basic framework, it's missing a couple of things. And I said, okay, great, rewrite it with those things. And then it did that, and it was like, you still need this. And so, yeah, four or five iterations, and it was able to spit out working code.

And you pressed run and it ran?

Yeah.
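(The iterate-until-it-runs result being described can be pictured with a toy neuroevolution loop. This is an illustrative sketch, not the actual code from the conversation, and not full NEAT, which also evolves network topology; here only a single weight is evolved, and the target function and all parameters are made up.)

```python
import random

TARGET = 3.0  # evolve a weight w so that w * x approximates 3 * x

def fitness(w):
    # Negative squared error of y = w * x against the target over a few inputs
    return -sum((w * x - TARGET * x) ** 2 for x in range(-5, 6))

def evolve(pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)  # fixed seed so runs are reproducible
    population = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half, then refill with mutated copies of the survivors
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [w + rng.gauss(0, 0.5) for w in survivors]
    return max(population, key=fitness)

best = evolve()
```

Each generation plays the role of one "rewrite it with those things" round: keep what works, mutate, and try again until pressing run actually runs.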
So there is still a skill set involved. It only works because, even though I haven't done a lot of development work, I've still done basic scripting and I have sort of the background to be able to read it and check on it. But that's a lot lower barrier to entry, in terms of hiring, than, you know, a full computer science degree or two to three years in QA testing or whatever it is.
To your question earlier about, like, as a people leader, would you make time for people to learn how to do this: my default answer is no.

He's turning your question around on you, man.

Yeah.
Because it's not like we make time today for, like, hey, you know, David, why don't you spend some time learning Java? It's assumed you can pick that skill up on your own. Right. I'm not confident that's the right answer going forward with these kinds of mind shifts, but right now I would not advocate, let's spend an hour a week learning about ChatGPT. I may provide some context to teams, saying, hey, this thing's pretty big, you might want to look at it on the side, but I wouldn't formally. Because there's a business we're trying to keep going.
Sure. I would get somebody to do a bunch and learn on it.

Yeah, something like that. Yeah.
And not necessarily once a week; you could do once a month and say, hey, these tools are immensely powerful, and they're going to keep getting more powerful. And for your own sake, as my employees helping keep the lights on, this will make you better at your job, so look into it. That's more powerful than saying, hey, one hour a week, I need you practicing on ChatGPT.

Yeah, no, that's not what I meant or had in mind. I mean, you could.
The best practices aren't known right now; they need to be discovered. Right. So I could see, in a larger organization, trying to say: we have a competitive advantage if we discover the best practices. If we're sharing what we learn internally, we set up a community of practice, or whatever you want to call it, around "how do we do this?", and then we disseminate that information across the organization. And that's not a weekly training, because there's no one to teach it. So you've got to open up some opportunity for people to explore it and share and build that.
So you learn to write, cheeky. And, like, right now, TikTok is our best source. And there's numbers missing.
Well, who said TikTok wasn't there to teach? You said there's no one to teach it. But what I would envision is actually a meeting very much like this one, where once a month people say: okay, hey, I've tried to use this, I've used these tactics to massage the AI. And you can build best practices in your organization just from having people be aware of it and, like you said, picking up the skills over time. And I think that'd be an unbelievable advantage when you can figure that out.
And I'm starting to use it in my own work. And so I work... don't tell them, but I work like an hour a day, because I can just use these tools to help me perform.

Have you found Millennial Nirvana, then?

I mean, I...

You traveled the other seven hours.

I took...
Well, no, I don't travel. I can tell you, I have kids: twins that are 26, going on 27, and a 35-year-old. And they're all kind of in computer businesses, and I'm hugely impressed by how hard they work. And everybody says that they're lazy and such, but... no, I'm just joking. Every generation says that about the next generation.

Yeah, no, I think they're actually working much harder.

No, we're all in the same building, downhill for the last 5,000 years.

Yeah. So no, they're good people.
Are you about to say we're at time?

No, no, no. I want to throw one more... final thought.

Okay, and then it's 11. So maybe we'll continue this in the hallway.
But I'd like to ask of you: who is interested in the idea of a sovereign digital twin, which is an AI that will basically work on my behalf, right? A digital twin of me.

Sounds like slavery to me, man. Robot rights.

Hold on, Marsha. Let him finish.
I'm curious where he's going with that.

But one that I control. Because, like, to Tim's point, right now I'm also very wary about talking to ChatGPT, because it's building a model of me, right? I don't know what it's going to do with it. I would rather have this be a model that I control.
Yeah, and obviously solar-powered, and the GPU chips...

Right. So that's what I'm investigating right now.

That's a great idea, by the way. It's a great idea. But then you would probably have to have a bigger brother to pay the taxes on yourself as well, because essentially you will not be able to sustain the amount of computing power that we have today in order to even fractionally represent you.
And actually, I just saw a recent talk about the human brain and some kind of a weird theory, the kind Steve Wozniak is great for; he's kind of off the wall. But he said, on a neurological level, with the human body, the human brain, we've figured out all of the things, where everything is. But he said: I could not find, nobody can really find, the memory, right? And he said, when kids are seven years old, they lose their teeth, and they lose all of their childhood memories. And he said, maybe the teeth are the...
All
right,
let's
check
out
the
board.