There is so much stuffing for a simple idea that I'm not sure if this piece deserves its own title, but I'll give it the benefit of the doubt.
One thing that I wonder though is how we will draw the line. If I'm writing a piece and do a Google search, and in that way invoke BERT under the hood, is anything that I write afterwards "AI-tainted"? What about the grammar checker? Or the spot removal tool in photoshop or gimp? Or the AI voice that reads back to me my own article so that I can find prose issues?
And that brings up the other problem: does the general public really know the extent of AI use today, never mind in the future?
With all of that out of the way, yes, I would rather read text produced by human beings, not because of its quality--the AI knows, sometimes humans can't help themselves and just keep writing the same thing over and over, especially when it comes to fiction--but just to defend human dominance.
>What about the grammar checker? Or the spot removal tool in photoshop or gimp? Or the AI voice that reads back to me my own article so that I can find prose issues?
We could get this whole discussion back to some semblance of sanity if we stopped calling any form of remotely complicated automation "AI". The term might as well be meaningless now.
Nothing about any of these "AIs" is intelligent in the layman's sense of artificial intelligence, let alone by the standards of biological or philosophical schools of thought.
> There is so much stuffing for a simple idea that I'm not sure if this piece deserves its own title, but I'll give it the benefit of the doubt.
Frankly I had the same thought writing it :D
It's more of a stake in the ground sort of a thing I guess?
What I really want is somebody saying "hey, there is an open standard already here" so I can use it.
The idea has some legs, but they are weak for the many reasons pointed out to me by fair criticism of "digital veganism". The main one is that labelling is one small part of quality. Tijmen Schep, in his 2016 "Design My Privacy" [1], proposed some really cool ideas around quality and trustworthiness labelling of IoT/mobile devices, but ran into the same issues. Responsibility ultimately lies with the consumer, and so long as consumers remain uneducated as to why low quality is harmful, and cannot verify the provenance of what they consume or its harmful effects, nothing will change.
Right now we seem to be at the stage of "It's just McDonald's/KFC for data - junk food is convenient, cheap, and not a problem - therefore mass-produced generative content won't be a problem".
The food analogy is powerful, but has limits, and I urge you to dig into Digital Vegan [2] if you want to take it further.
>One thing that I wonder though is how we will draw the line. If I'm writing a piece and do a Google search, and in that way invoke BERT under the hood, is anything that I write afterwards "AI-tainted"? What about the grammar checker? Or the spot removal tool in photoshop or gimp? Or the AI voice that reads back to me my own article so that I can find prose issues?
>And that brings up the other problem: does the general public really know the extent of AI use today, never mind in the future?
The line is drawn at human ownership/responsibility. A piece of content can be 'AI-tainted' or '100% produced by AI'; what makes the difference is whether a human takes responsibility for the end product.
Responsibility and ownership always lie with humans. Even supposedly 100% AI-generated content still comes from a process started and maintained by humans, and, currently, prompted by a human.
The humans running those processes can attempt to deny ownership or responsibility if they so choose, but whenever it matters, such as in law or any other arena dealing with liability or ownership rights, the humans will be made to own the responsibility.
Same as with self-driving cars. We can debate who the operator is and to what extent the manufacturers, the occupants, or the owners are responsible when the car causes harm, but we'll never try to punish the car while calling all the humans involved blameless. The point of holding people responsible for outcomes and actions is to drive meaningful change in human behavior in order to reduce harms and encourage human flourishing.
In terms of ownership and intellectual property, again, the point of even having rules is to manage interactions between humans so we can behave civilly towards each other. There can be no meaningful category of content produced "100%" by AI unless AIs become persons under the law or in the eyes of most humans.
If an AI system can ever truly produce content of its own volition, without any human action taken to make that specific thing happen, then that system would be a rational actor on par with other persons, and we'll probably begin the debate over whether AI systems should be treated as people in society and under the law. That may even become a new category distinct from human persons, as with the concept of corporate persons.
> ... yes, I would rather read text produced by human beings, not because of its quality ... (snip) ... but just to defend human dominance.
One could make a strong argument that defending moral principles matters more than requiring the underlying creative force to have a particular biological composition.
As an example, I don't want a system to incentivize humans kept as almost-slaves to retype AI generated content.
How can one tell the difference between all the gradations of "completely" human generated to not?