It’s also worth considering the gap between intended and realised meanings in a work, and whether we’d consider that dynamic one worth pursuing in the future.
Kind of like the old joke about English teachers looking too deeply into classic literature, isn’t it? I might not intend to relate and extend layers of context with the line, “John sat squarely on the desk,” but a reader observing that line as part of a surrounding text could infer meaning and purpose beyond what I originally intended.
With AI-generated text, that dance of writers filling (or deliberately avoiding) their work with meaning, and readers lifting the meaning out of the work, becomes much less human in and of itself. Or does it? Will schools of the future choose AI-written stories and film for students to learn from, over material written by people?